Nov 23 00:06:46 crc systemd[1]: Starting Kubernetes Kubelet... Nov 23 00:06:46 crc restorecon[4694]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Nov 23 00:06:46 
crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Nov 23 00:06:46 crc restorecon[4694]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 
00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:46 crc restorecon[4694]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc 
restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 23 00:06:46 crc restorecon[4694]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:46 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:47 
crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 
00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 00:06:47 crc 
restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 
00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 
00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:47 crc restorecon[4694]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 00:06:47 crc restorecon[4694]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 23 00:06:48 crc kubenswrapper[4743]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 00:06:48 crc kubenswrapper[4743]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 23 00:06:48 crc kubenswrapper[4743]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 00:06:48 crc kubenswrapper[4743]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 23 00:06:48 crc kubenswrapper[4743]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 23 00:06:48 crc kubenswrapper[4743]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.411255 4743 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416608 4743 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416639 4743 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416648 4743 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416657 4743 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416666 4743 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416675 4743 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416684 4743 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416691 4743 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416699 4743 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416707 4743 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416714 4743 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416722 4743 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416729 4743 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416737 4743 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416744 4743 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416752 4743 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416759 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416767 4743 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416775 4743 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416796 4743 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416804 4743 feature_gate.go:330] unrecognized feature gate: 
PersistentIPsForVirtualization Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416811 4743 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416818 4743 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416826 4743 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416836 4743 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416846 4743 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416855 4743 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416865 4743 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416873 4743 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416882 4743 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416890 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416899 4743 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416907 4743 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416915 4743 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416924 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416933 4743 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416941 4743 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416949 4743 feature_gate.go:330] unrecognized feature gate: Example Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416957 4743 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416964 4743 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416972 4743 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416979 4743 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416987 4743 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.416995 4743 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417002 4743 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417010 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 
23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417017 4743 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417024 4743 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417034 4743 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417042 4743 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417050 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417057 4743 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417064 4743 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417072 4743 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417079 4743 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417089 4743 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417098 4743 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417106 4743 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417113 4743 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417120 4743 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417128 4743 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417138 4743 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417147 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417156 4743 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417168 4743 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417176 4743 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417184 4743 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417192 4743 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417200 4743 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417210 4743 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.417220 4743 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418257 4743 flags.go:64] FLAG: --address="0.0.0.0" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418279 4743 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418293 4743 flags.go:64] FLAG: --anonymous-auth="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418304 4743 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418317 4743 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418326 4743 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418337 4743 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418350 4743 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418360 4743 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418369 4743 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418379 4743 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418388 4743 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418397 4743 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418406 4743 flags.go:64] FLAG: --cgroup-root="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418415 4743 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418424 4743 flags.go:64] FLAG: --client-ca-file="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418433 4743 flags.go:64] FLAG: --cloud-config="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418441 4743 flags.go:64] FLAG: --cloud-provider="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418450 4743 flags.go:64] FLAG: --cluster-dns="[]" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418467 4743 flags.go:64] FLAG: --cluster-domain="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418475 4743 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418510 4743 flags.go:64] FLAG: --config-dir="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418519 4743 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418528 4743 flags.go:64] FLAG: --container-log-max-files="5" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418539 4743 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418548 4743 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418557 4743 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418566 4743 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 23 00:06:48 crc kubenswrapper[4743]: 
I1123 00:06:48.418575 4743 flags.go:64] FLAG: --contention-profiling="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418584 4743 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418594 4743 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418605 4743 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418614 4743 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418624 4743 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418633 4743 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418642 4743 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418650 4743 flags.go:64] FLAG: --enable-load-reader="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418659 4743 flags.go:64] FLAG: --enable-server="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418668 4743 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418679 4743 flags.go:64] FLAG: --event-burst="100" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418688 4743 flags.go:64] FLAG: --event-qps="50" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418696 4743 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418705 4743 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418715 4743 flags.go:64] FLAG: --eviction-hard="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418726 4743 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418734 4743 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418743 4743 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418752 4743 flags.go:64] FLAG: --eviction-soft="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418761 4743 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418769 4743 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418778 4743 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418786 4743 flags.go:64] FLAG: --experimental-mounter-path="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418795 4743 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418804 4743 flags.go:64] FLAG: --fail-swap-on="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418812 4743 flags.go:64] FLAG: --feature-gates="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418823 4743 flags.go:64] FLAG: --file-check-frequency="20s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418831 4743 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418841 4743 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 
00:06:48.418850 4743 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418859 4743 flags.go:64] FLAG: --healthz-port="10248" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418868 4743 flags.go:64] FLAG: --help="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418878 4743 flags.go:64] FLAG: --hostname-override="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418887 4743 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418896 4743 flags.go:64] FLAG: --http-check-frequency="20s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418906 4743 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418915 4743 flags.go:64] FLAG: --image-credential-provider-config="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418923 4743 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418932 4743 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418940 4743 flags.go:64] FLAG: --image-service-endpoint="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418949 4743 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418957 4743 flags.go:64] FLAG: --kube-api-burst="100" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418966 4743 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418975 4743 flags.go:64] FLAG: --kube-api-qps="50" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418984 4743 flags.go:64] FLAG: --kube-reserved="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.418993 4743 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419001 4743 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419010 4743 flags.go:64] FLAG: --kubelet-cgroups="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419028 4743 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419037 4743 flags.go:64] FLAG: --lock-file="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419047 4743 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419056 4743 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419065 4743 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419078 4743 flags.go:64] FLAG: --log-json-split-stream="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419087 4743 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419096 4743 flags.go:64] FLAG: --log-text-split-stream="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419105 4743 flags.go:64] FLAG: --logging-format="text" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419113 4743 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419122 4743 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 
00:06:48.419130 4743 flags.go:64] FLAG: --manifest-url="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419139 4743 flags.go:64] FLAG: --manifest-url-header="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419151 4743 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419159 4743 flags.go:64] FLAG: --max-open-files="1000000" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419170 4743 flags.go:64] FLAG: --max-pods="110" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419178 4743 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419212 4743 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419221 4743 flags.go:64] FLAG: --memory-manager-policy="None" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419230 4743 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419239 4743 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419247 4743 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419257 4743 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419277 4743 flags.go:64] FLAG: --node-status-max-images="50" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419286 4743 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419295 4743 flags.go:64] FLAG: --oom-score-adj="-999" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419304 4743 flags.go:64] FLAG: --pod-cidr="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419312 4743 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419327 4743 flags.go:64] FLAG: --pod-manifest-path="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419335 4743 flags.go:64] FLAG: --pod-max-pids="-1" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419344 4743 flags.go:64] FLAG: --pods-per-core="0" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419353 4743 flags.go:64] FLAG: --port="10250" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419362 4743 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419370 4743 flags.go:64] FLAG: --provider-id="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419379 4743 flags.go:64] FLAG: --qos-reserved="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419387 4743 flags.go:64] FLAG: --read-only-port="10255" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419396 4743 flags.go:64] FLAG: --register-node="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419405 4743 flags.go:64] FLAG: --register-schedulable="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419415 4743 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419429 4743 flags.go:64] FLAG: --registry-burst="10" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419438 
4743 flags.go:64] FLAG: --registry-qps="5" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419447 4743 flags.go:64] FLAG: --reserved-cpus="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419455 4743 flags.go:64] FLAG: --reserved-memory="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419466 4743 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419475 4743 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419508 4743 flags.go:64] FLAG: --rotate-certificates="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419517 4743 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419526 4743 flags.go:64] FLAG: --runonce="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419535 4743 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419545 4743 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419555 4743 flags.go:64] FLAG: --seccomp-default="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419564 4743 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419573 4743 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419582 4743 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419591 4743 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419600 4743 flags.go:64] FLAG: --storage-driver-password="root" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419609 4743 flags.go:64] FLAG: --storage-driver-secure="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419617 4743 flags.go:64] FLAG: --storage-driver-table="stats" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419626 4743 flags.go:64] FLAG: --storage-driver-user="root" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419635 4743 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419644 4743 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419652 4743 flags.go:64] FLAG: --system-cgroups="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419661 4743 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419676 4743 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419685 4743 flags.go:64] FLAG: --tls-cert-file="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419693 4743 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419705 4743 flags.go:64] FLAG: --tls-min-version="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419713 4743 flags.go:64] FLAG: --tls-private-key-file="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419722 4743 flags.go:64] FLAG: --topology-manager-policy="none" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419731 4743 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419740 4743 flags.go:64] 
FLAG: --topology-manager-scope="container" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419749 4743 flags.go:64] FLAG: --v="2" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419760 4743 flags.go:64] FLAG: --version="false" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419771 4743 flags.go:64] FLAG: --vmodule="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419782 4743 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.419792 4743 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420007 4743 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420018 4743 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420027 4743 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420035 4743 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420044 4743 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420052 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420062 4743 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420073 4743 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420083 4743 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420093 4743 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420101 4743 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420109 4743 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420117 4743 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420127 4743 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420136 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420144 4743 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420153 4743 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420161 4743 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420169 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420178 4743 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420187 4743 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420195 4743 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420202 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420210 4743 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420220 4743 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420228 4743 feature_gate.go:330] unrecognized feature gate: Example Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420236 4743 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420244 4743 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420251 4743 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420258 4743 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420266 4743 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420274 4743 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420281 4743 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420288 4743 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420299 4743 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420309 4743 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420318 4743 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420326 4743 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420342 4743 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420351 4743 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420359 4743 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420368 4743 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420376 4743 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420384 4743 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420391 4743 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420398 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420407 4743 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420414 4743 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420422 4743 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420429 4743 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420436 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420444 4743 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420452 4743 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420459 4743 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420466 4743 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420473 4743 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420509 4743 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420517 4743 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420524 4743 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420531 4743 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420540 4743 feature_gate.go:330] unrecognized feature 
gate: NutanixMultiSubnets Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420548 4743 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420555 4743 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420563 4743 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420570 4743 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420577 4743 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420585 4743 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420592 4743 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420600 4743 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420608 4743 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.420624 4743 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.420649 4743 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.434233 4743 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.434293 4743 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434419 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434432 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434442 4743 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434451 4743 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434460 4743 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434471 4743 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434510 4743 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434519 4743 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434530 4743 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434539 4743 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434549 4743 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434557 4743 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434565 4743 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434573 4743 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434582 4743 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434590 4743 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434597 4743 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434605 4743 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434613 4743 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434621 4743 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434628 4743 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434636 4743 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434644 4743 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434652 4743 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434660 4743 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434668 4743 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434675 4743 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434683 4743 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434693 4743 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434702 4743 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434710 4743 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434719 4743 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434727 4743 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434735 4743 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434745 4743 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434752 4743 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434760 4743 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434768 4743 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434777 4743 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434786 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434794 4743 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434802 4743 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434810 4743 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434818 4743 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434828 4743 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434837 4743 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434846 4743 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434853 4743 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434861 4743 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434868 4743 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434876 4743 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434884 4743 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434891 4743 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434899 4743 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434907 4743 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434915 4743 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434923 4743 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434930 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434938 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434945 4743 feature_gate.go:330] unrecognized feature gate: Example Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434953 4743 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434961 4743 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434968 4743 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434976 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434983 4743 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434991 4743 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.434999 4743 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435006 4743 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435013 4743 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435022 4743 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435032 4743 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 00:06:48 crc 
kubenswrapper[4743]: I1123 00:06:48.435046 4743 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435305 4743 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435321 4743 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435330 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435338 4743 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435345 4743 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435353 4743 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435361 4743 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435369 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435377 4743 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435386 4743 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435396 4743 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435408 4743 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435417 4743 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435426 4743 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435435 4743 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435445 4743 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435454 4743 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435461 4743 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435470 4743 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435478 4743 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435511 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435522 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435533 4743 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435543 4743 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435553 4743 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435562 4743 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435572 4743 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435582 4743 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435591 4743 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435601 4743 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435609 4743 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435617 4743 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435625 4743 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435633 4743 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435642 4743 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435650 4743 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435657 4743 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435665 4743 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435672 4743 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435680 4743 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435689 4743 feature_gate.go:330] unrecognized feature gate: Example Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435697 4743 feature_gate.go:330] unrecognized feature gate: 
AWSEFSDriverVolumeMetrics Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435704 4743 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435712 4743 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435719 4743 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435727 4743 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435735 4743 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435743 4743 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435750 4743 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435757 4743 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435766 4743 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435775 4743 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435783 4743 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435793 4743 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435802 4743 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435812 4743 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435822 4743 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435831 4743 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435839 4743 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435847 4743 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435855 4743 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435864 4743 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435873 4743 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435883 4743 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435892 4743 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435900 4743 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435910 4743 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435918 4743 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435926 4743 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435933 4743 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.435942 4743 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.435955 4743 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.437261 4743 server.go:940] "Client rotation is on, will bootstrap in background" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.444225 4743 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.444392 4743 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.446602 4743 server.go:997] "Starting client certificate rotation" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.446651 4743 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.447654 4743 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-22 21:16:30.753230842 +0000 UTC Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.447805 4743 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 717h9m42.305431058s for next certificate rotation Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.489359 4743 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.492352 4743 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.515188 4743 log.go:25] "Validated CRI v1 runtime API" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.559011 4743 log.go:25] "Validated CRI v1 image API" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.561831 4743 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.574204 4743 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-23-00-00-53-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.574275 4743 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.603922 4743 manager.go:217] Machine: {Timestamp:2025-11-23 00:06:48.599864996 +0000 UTC m=+0.677963193 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3d2e0a67-330f-4e1b-8e8f-608360b1d20e BootID:0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 
Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5b:d8:21 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5b:d8:21 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:95:67:16 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e3:68:d5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:94:bb:e6 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:57:93:55 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a2:bd:b4:29:ee:16 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:22:06:9c:aa:be Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: 
DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.604354 4743 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.604701 4743 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.607797 4743 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.608190 4743 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.608252 4743 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.608651 4743 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.608671 4743 container_manager_linux.go:303] "Creating device plugin manager" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.609449 4743 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.609530 4743 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 23 
00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.609828 4743 state_mem.go:36] "Initialized new in-memory state store" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.609979 4743 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.613886 4743 kubelet.go:418] "Attempting to sync node with API server" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.613977 4743 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.614021 4743 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.614050 4743 kubelet.go:324] "Adding apiserver pod source" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.614074 4743 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.621366 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.621534 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.621658 4743 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.621842 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.621954 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.622995 4743 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.624805 4743 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627633 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627687 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627708 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627726 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627764 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627779 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627792 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627814 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627830 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627845 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627864 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.627877 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.629355 4743 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.630274 4743 server.go:1280] "Started kubelet" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.631723 4743 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.632267 4743 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.632537 4743 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.632677 4743 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 00:06:48 crc systemd[1]: Started Kubernetes Kubelet. 
Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.644380 4743 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a7a0eef32d6ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-23 00:06:48.630187757 +0000 UTC m=+0.708285914,LastTimestamp:2025-11-23 00:06:48.630187757 +0000 UTC m=+0.708285914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.645799 4743 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.645853 4743 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.645891 4743 server.go:460] "Adding debug handlers to kubelet server" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.646021 4743 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:56:07.924100646 +0000 UTC Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.646064 4743 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 692h49m19.278040684s for next certificate rotation Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.646741 4743 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.647317 4743 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.648164 4743 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.647951 4743 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="200ms" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.647398 4743 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.648447 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.648622 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.649970 4743 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd 
api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.650011 4743 factory.go:55] Registering systemd factory Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.650032 4743 factory.go:221] Registration of the systemd container factory successfully Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.650726 4743 factory.go:153] Registering CRI-O factory Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.650767 4743 factory.go:221] Registration of the crio container factory successfully Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.650810 4743 factory.go:103] Registering Raw factory Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.650846 4743 manager.go:1196] Started watching for new ooms in manager Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.652062 4743 manager.go:319] Starting recovery of all containers Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.668559 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.668643 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.668669 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.668694 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.668939 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.668964 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.668986 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669005 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669027 
4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669045 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669094 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669113 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669137 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669169 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669190 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669210 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669226 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669244 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669262 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669282 4743 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669300 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669318 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669340 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669361 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669379 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669397 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669419 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669481 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669531 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669550 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669569 4743 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669589 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669608 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669627 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669646 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669665 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669684 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669703 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669721 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669750 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669770 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669788 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669806 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669825 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669843 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669860 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669880 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669899 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669918 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669938 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669958 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669976 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.669999 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.670019 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.670721 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.670798 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.670837 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.670869 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.670893 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.670936 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.670961 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.670985 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671015 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671040 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671067 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671092 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671117 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671142 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671183 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671209 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671238 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671270 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671297 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671322 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671350 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671377 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671401 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671431 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671460 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671522 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671553 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671580 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671606 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671632 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671662 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671688 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671713 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671739 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671764 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671790 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671816 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671841 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671867 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671893 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671922 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671947 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671972 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.671998 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672029 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672057 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672094 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672122 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672146 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672209 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672246 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672278 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672353 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672383 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672412 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672443 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672475 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672543 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672573 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672603 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.672629 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675018 4743 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675078 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675110 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675136 4743 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675162 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675188 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675212 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675236 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675259 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675286 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675313 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675341 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675368 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675398 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675425 4743 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675451 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675477 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675563 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675642 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675679 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675707 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675737 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675765 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675790 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675816 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675844 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675870 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675898 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675923 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675949 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675975 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.675999 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676024 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676053 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676078 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676104 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676130 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676154 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676217 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676243 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676266 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676297 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676321 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676348 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676375 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676400 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676426 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676450 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676476 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676619 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676711 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676738 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676774 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676800 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676826 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676849 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676874 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676911 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676936 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676962 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.676990 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.677017 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.677043 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.677070 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.677096 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.677592 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.677656 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.677687 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.678824 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.678864 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.678904 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.678927 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.678949 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.678980 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679003 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679035 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679056 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679077 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679104 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679125 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679154 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679177 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679203 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679233 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679256 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679600 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679625 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679644 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679673 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679695 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679723 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679745 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679766 4743 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679785 4743 reconstruct.go:97] "Volume reconstruction finished" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.679800 4743 reconciler.go:26] "Reconciler: start to sync state" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.692916 4743 manager.go:324] Recovery completed Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.709536 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.712190 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.712244 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.712261 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.713215 4743 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.713243 4743 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.713275 4743 state_mem.go:36] "Initialized new in-memory state store" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.717900 4743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.720828 4743 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.720902 4743 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.720950 4743 kubelet.go:2335] "Starting kubelet main sync loop" Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.721035 4743 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 00:06:48 crc kubenswrapper[4743]: W1123 00:06:48.724098 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.724228 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.738031 4743 policy_none.go:49] "None policy: Start" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.739061 4743 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.739104 4743 state_mem.go:35] "Initializing new in-memory state store" Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.747823 4743 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.802827 4743 manager.go:334] "Starting Device Plugin manager" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.802891 4743 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.802909 4743 server.go:79] "Starting device plugin registration server" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.803563 4743 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.803583 4743 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.804183 4743 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.804698 4743 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.804779 4743 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.819928 4743 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.821212 4743 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 23 00:06:48 crc kubenswrapper[4743]: 
I1123 00:06:48.821331 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.822896 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.822941 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.822956 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.823114 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.823337 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.823419 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.824116 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.824161 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.824182 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.824382 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.824528 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.824565 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.825020 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.825063 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.825076 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.825987 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.826025 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.826040 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.826055 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.826183 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.826276 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.826890 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.826970 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.827279 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.828320 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.828373 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.828390 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.828871 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.828923 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.828938 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.829831 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.829856 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.829890 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.830451 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.830530 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.830555 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.831605 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.831639 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.831655 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.831833 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.831879 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.832680 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.832708 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.832723 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.850174 4743 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="400ms" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883454 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883505 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883532 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883551 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883568 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883670 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883717 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883755 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883806 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883869 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883908 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.883997 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.884046 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.884079 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.884111 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.903964 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.905458 
4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.905561 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.905582 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.905618 4743 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 00:06:48 crc kubenswrapper[4743]: E1123 00:06:48.906065 4743 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.986072 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.986556 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.986849 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.987088 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.987319 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.987580 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.987761 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.987172 4743 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.986405 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.987377 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.986994 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.987658 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.986666 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.987825 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988035 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988416 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988456 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc 
kubenswrapper[4743]: I1123 00:06:48.988519 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988528 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988564 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988575 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988543 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988612 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988633 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988671 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988685 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988742 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988829 4743 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988850 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 00:06:48 crc kubenswrapper[4743]: I1123 00:06:48.988886 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.106219 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.107911 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.107965 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.107985 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.108021 4743 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 00:06:49 crc kubenswrapper[4743]: E1123 00:06:49.108578 4743 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.157100 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.179821 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.201150 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 23 00:06:49 crc kubenswrapper[4743]: W1123 00:06:49.222371 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-04b7a6fe50e21c5e3e11d2632128bb61ad5b37ce8d29ac5ff0faacdf8210907d WatchSource:0}: Error finding container 04b7a6fe50e21c5e3e11d2632128bb61ad5b37ce8d29ac5ff0faacdf8210907d: Status 404 returned error can't find the container with id 04b7a6fe50e21c5e3e11d2632128bb61ad5b37ce8d29ac5ff0faacdf8210907d Nov 23 00:06:49 crc kubenswrapper[4743]: W1123 00:06:49.223305 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ce0456ccc597108f5d2206f4a5e257e57733754988b7545ae59d178a642ea0ed WatchSource:0}: Error finding container ce0456ccc597108f5d2206f4a5e257e57733754988b7545ae59d178a642ea0ed: Status 404 returned error can't find the container with id ce0456ccc597108f5d2206f4a5e257e57733754988b7545ae59d178a642ea0ed Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.227399 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:49 crc kubenswrapper[4743]: W1123 00:06:49.229283 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-514ca5572b05d5d736938700f31a0fc8b55c3fab738e770084aeb938fdcd60bc WatchSource:0}: Error finding container 514ca5572b05d5d736938700f31a0fc8b55c3fab738e770084aeb938fdcd60bc: Status 404 returned error can't find the container with id 514ca5572b05d5d736938700f31a0fc8b55c3fab738e770084aeb938fdcd60bc Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.239112 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 00:06:49 crc kubenswrapper[4743]: E1123 00:06:49.251966 4743 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="800ms" Nov 23 00:06:49 crc kubenswrapper[4743]: W1123 00:06:49.252040 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-834969da5d37e2f80748ec14893cc07a3cca3568cc2d60e8ddf139a1b5b708fd WatchSource:0}: Error finding container 834969da5d37e2f80748ec14893cc07a3cca3568cc2d60e8ddf139a1b5b708fd: Status 404 returned error can't find the container with id 834969da5d37e2f80748ec14893cc07a3cca3568cc2d60e8ddf139a1b5b708fd Nov 23 00:06:49 crc kubenswrapper[4743]: W1123 00:06:49.276109 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a9e497fe7324fe6a41373caef758f2b4945d1e7ba6a84b8b6d11b3910135c12a WatchSource:0}: Error finding container a9e497fe7324fe6a41373caef758f2b4945d1e7ba6a84b8b6d11b3910135c12a: Status 404 returned error can't find the container with id a9e497fe7324fe6a41373caef758f2b4945d1e7ba6a84b8b6d11b3910135c12a Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.508796 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.510942 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.510993 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.511007 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.511047 4743 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 00:06:49 crc kubenswrapper[4743]: E1123 00:06:49.511761 4743 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Nov 23 00:06:49 crc kubenswrapper[4743]: W1123 00:06:49.579138 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:49 crc kubenswrapper[4743]: E1123 00:06:49.579346 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.634177 4743 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.728508 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"834969da5d37e2f80748ec14893cc07a3cca3568cc2d60e8ddf139a1b5b708fd"} Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.730277 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"514ca5572b05d5d736938700f31a0fc8b55c3fab738e770084aeb938fdcd60bc"} Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.731528 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ce0456ccc597108f5d2206f4a5e257e57733754988b7545ae59d178a642ea0ed"} Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.734219 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"04b7a6fe50e21c5e3e11d2632128bb61ad5b37ce8d29ac5ff0faacdf8210907d"} Nov 23 00:06:49 crc kubenswrapper[4743]: I1123 00:06:49.735420 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a9e497fe7324fe6a41373caef758f2b4945d1e7ba6a84b8b6d11b3910135c12a"} Nov 23 00:06:49 crc kubenswrapper[4743]: W1123 00:06:49.840863 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:49 crc kubenswrapper[4743]: E1123 00:06:49.841003 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:49 crc kubenswrapper[4743]: W1123 00:06:49.887642 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:49 crc kubenswrapper[4743]: E1123 00:06:49.887759 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:50 crc kubenswrapper[4743]: E1123 00:06:50.053553 4743 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="1.6s" Nov 23 00:06:50 crc 
kubenswrapper[4743]: W1123 00:06:50.093049 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:50 crc kubenswrapper[4743]: E1123 00:06:50.093178 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.312420 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.314009 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.314107 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.314125 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.314164 4743 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 00:06:50 crc kubenswrapper[4743]: E1123 00:06:50.314860 4743 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.633397 4743 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.742834 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a"} Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.745894 4743 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76" exitCode=0 Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.746078 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.746232 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76"} Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.747635 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.747683 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:50 crc 
kubenswrapper[4743]: I1123 00:06:50.747702 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.749567 4743 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066" exitCode=0 Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.749627 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066"} Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.749671 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.751155 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.751216 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.751236 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.752655 4743 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6" exitCode=0 Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.752714 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6"} Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.752829 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.754567 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.754616 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.754629 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.754802 4743 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="56358066a1a48d72736b08cadd4048dcd41b5c0d8f62aa3c1eadb3c3adc376e3" exitCode=0 Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.754836 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"56358066a1a48d72736b08cadd4048dcd41b5c0d8f62aa3c1eadb3c3adc376e3"} Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.754986 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.760042 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:50 
crc kubenswrapper[4743]: I1123 00:06:50.760128 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.760153 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.762688 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.764686 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.764722 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:50 crc kubenswrapper[4743]: I1123 00:06:50.764740 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:51 crc kubenswrapper[4743]: W1123 00:06:51.587015 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:51 crc kubenswrapper[4743]: E1123 00:06:51.587134 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.634205 4743 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:51 crc kubenswrapper[4743]: E1123 00:06:51.654672 4743 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="3.2s" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.760953 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d"} Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.761004 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f"} Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.761016 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a"} Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.764153 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15"} Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.764179 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891"} Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.764196 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced"} Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.765660 4743 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="916330324d3654ea5f620d89b9a427d5807653423e1bc66a0bc3d7c0ee52ef3d" exitCode=0 Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.765688 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"916330324d3654ea5f620d89b9a427d5807653423e1bc66a0bc3d7c0ee52ef3d"} Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.765789 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.767022 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.767068 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.767084 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.769648 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c"} Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.769691 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923"} Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.769704 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688"} Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.769729 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.770431 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.770459 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:51 crc 
kubenswrapper[4743]: I1123 00:06:51.770468 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.771954 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5e56c77b063825f42a5134699a4e67ab4bb0f3f48f7fa7521e091156c6f63504"} Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.772014 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.772818 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.772857 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.772868 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.915454 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.919622 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.919659 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.919669 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:51 crc kubenswrapper[4743]: I1123 00:06:51.919692 4743 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 00:06:51 crc kubenswrapper[4743]: E1123 00:06:51.920083 4743 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.339676 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.633621 4743 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:52 crc kubenswrapper[4743]: W1123 00:06:52.735013 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:52 crc kubenswrapper[4743]: E1123 00:06:52.735140 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 
00:06:52.779011 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca4e37d81a441bff642839c92a2dce5cd9e9091c898a18b299bb1560669cc2c4"} Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.779421 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0"} Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.779127 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.780731 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.780799 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.780820 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.782128 4743 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="18ed5a5cefeb2682a9f6dfdb3eea4491c8f973ef64e61b1e4721ed1e4943da97" exitCode=0 Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.782243 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.782253 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"18ed5a5cefeb2682a9f6dfdb3eea4491c8f973ef64e61b1e4721ed1e4943da97"} Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.782389 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.782412 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.782608 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.783098 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.783141 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.783153 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.784048 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.784101 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.784105 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 
00:06:52.784137 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.784116 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.784414 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.784542 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.784618 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:52 crc kubenswrapper[4743]: I1123 00:06:52.785783 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:53 crc kubenswrapper[4743]: W1123 00:06:53.063689 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:53 crc kubenswrapper[4743]: E1123 00:06:53.063825 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.172714 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 00:06:53 crc kubenswrapper[4743]: W1123 00:06:53.251119 4743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:53 crc kubenswrapper[4743]: E1123 00:06:53.251240 4743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.634336 4743 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.791642 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0aa53db5e48541274d532c593c3cf33aea41d9329aa7a25e35803ea2854274e3"} Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.791710 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"038ac58c80d07351f92865d326362026ca8ecc38c92dce1e4df44aa236a36223"} Nov 23 00:06:53 crc 
kubenswrapper[4743]: I1123 00:06:53.793689 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.796090 4743 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca4e37d81a441bff642839c92a2dce5cd9e9091c898a18b299bb1560669cc2c4" exitCode=255 Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.796188 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ca4e37d81a441bff642839c92a2dce5cd9e9091c898a18b299bb1560669cc2c4"} Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.796265 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.796284 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.796353 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.798042 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.798076 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.798135 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.798153 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.798091 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.798207 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.798389 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.798411 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.798434 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:53 crc kubenswrapper[4743]: I1123 00:06:53.800108 4743 scope.go:117] "RemoveContainer" containerID="ca4e37d81a441bff642839c92a2dce5cd9e9091c898a18b299bb1560669cc2c4" Nov 23 00:06:54 crc kubenswrapper[4743]: E1123 00:06:54.080957 4743 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a7a0eef32d6ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-23 00:06:48.630187757 +0000 UTC m=+0.708285914,LastTimestamp:2025-11-23 00:06:48.630187757 +0000 UTC m=+0.708285914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.142090 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.805035 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.807614 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70"} Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.807652 4743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.807716 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.808778 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.808809 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.808818 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.813108 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b272d18651f2645902da5305e7c223c5b282994977f347c4d9a4d39e7ceae26b"} Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.813182 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.813224 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d7329979bf15a7e36fbef2a46759df2324ac9cfb088a768192f44319cb7f3131"} Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.813146 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.813530 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e6810fc1c4651b2076cc9637261a24720d02d16dc57e72805aaedb25c13e8b69"} Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.814604 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.814631 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.814640 
4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.814713 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.814757 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:54 crc kubenswrapper[4743]: I1123 00:06:54.814775 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.120731 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.122158 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.122204 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.122212 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.122244 4743 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.298314 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.816173 4743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.816229 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.816245 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.817594 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.817654 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.817671 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.817796 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.817829 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:55 crc kubenswrapper[4743]: I1123 00:06:55.817840 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:56 crc kubenswrapper[4743]: I1123 00:06:56.530077 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:56 crc kubenswrapper[4743]: I1123 00:06:56.818902 4743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 00:06:56 crc kubenswrapper[4743]: I1123 00:06:56.818984 4743 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:56 crc kubenswrapper[4743]: I1123 00:06:56.820252 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:56 crc kubenswrapper[4743]: I1123 00:06:56.820309 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:56 crc kubenswrapper[4743]: I1123 00:06:56.820334 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.143034 4743 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.143151 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.449571 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.538316 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.538612 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.540380 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.540477 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.540539 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.822912 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.824345 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.824410 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:57 crc kubenswrapper[4743]: I1123 00:06:57.824431 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.443890 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.444067 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.445367 4743 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.445424 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.445448 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.460330 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:58 crc kubenswrapper[4743]: E1123 00:06:58.820204 4743 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.825793 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.825965 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.826180 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.827388 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.827860 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.827895 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.828035 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.828084 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:58 crc kubenswrapper[4743]: I1123 00:06:58.828106 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:59 crc kubenswrapper[4743]: I1123 00:06:59.122438 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 23 00:06:59 crc kubenswrapper[4743]: I1123 00:06:59.122734 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:59 crc kubenswrapper[4743]: I1123 00:06:59.124479 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:59 crc kubenswrapper[4743]: I1123 00:06:59.124629 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:59 crc kubenswrapper[4743]: I1123 00:06:59.124651 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:06:59 crc kubenswrapper[4743]: I1123 00:06:59.828648 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:06:59 crc kubenswrapper[4743]: I1123 00:06:59.829992 4743 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 23 00:06:59 crc kubenswrapper[4743]: I1123 00:06:59.830044 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:06:59 crc kubenswrapper[4743]: I1123 00:06:59.830063 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:04 crc kubenswrapper[4743]: I1123 00:07:04.635049 4743 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 23 00:07:04 crc kubenswrapper[4743]: I1123 00:07:04.818457 4743 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 23 00:07:04 crc kubenswrapper[4743]: I1123 00:07:04.818558 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 23 00:07:04 crc kubenswrapper[4743]: I1123 00:07:04.822746 4743 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 23 00:07:04 crc kubenswrapper[4743]: I1123 00:07:04.822837 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 23 00:07:05 crc kubenswrapper[4743]: I1123 00:07:05.305442 4743 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]log ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]etcd ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/generic-apiserver-start-informers ok Nov 23 00:07:05 crc kubenswrapper[4743]: 
[+]poststarthook/priority-and-fairness-config-consumer ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/priority-and-fairness-filter ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/start-apiextensions-informers ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/start-apiextensions-controllers ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/crd-informer-synced ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/start-system-namespaces-controller ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 23 00:07:05 crc kubenswrapper[4743]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 23 00:07:05 crc kubenswrapper[4743]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/bootstrap-controller ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/start-kube-aggregator-informers ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/apiservice-registration-controller ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/apiservice-discovery-controller ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]autoregister-completion ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/apiservice-openapi-controller ok Nov 23 00:07:05 crc kubenswrapper[4743]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 23 00:07:05 crc kubenswrapper[4743]: livez check failed Nov 23 00:07:05 crc kubenswrapper[4743]: I1123 00:07:05.305547 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.142821 4743 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.143037 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.648256 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.648873 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.650722 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.650793 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.650816 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.670436 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.857376 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.858835 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.858886 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:07 crc kubenswrapper[4743]: I1123 00:07:07.858911 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:08 crc kubenswrapper[4743]: E1123 00:07:08.820403 4743 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.797094 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.797301 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:07:09 crc kubenswrapper[4743]: E1123 00:07:09.798084 4743 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.799207 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.799289 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.799317 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.799554 4743 trace.go:236] Trace[473964521]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 00:06:58.887) (total time: 
10911ms): Nov 23 00:07:09 crc kubenswrapper[4743]: Trace[473964521]: ---"Objects listed" error: 10911ms (00:07:09.799) Nov 23 00:07:09 crc kubenswrapper[4743]: Trace[473964521]: [10.911590754s] [10.911590754s] END Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.799585 4743 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 23 00:07:09 crc kubenswrapper[4743]: E1123 00:07:09.805456 4743 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.806305 4743 trace.go:236] Trace[1369537966]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 00:06:59.156) (total time: 10649ms): Nov 23 00:07:09 crc kubenswrapper[4743]: Trace[1369537966]: ---"Objects listed" error: 10649ms (00:07:09.805) Nov 23 00:07:09 crc kubenswrapper[4743]: Trace[1369537966]: [10.649734893s] [10.649734893s] END Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.806453 4743 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.807740 4743 trace.go:236] Trace[1911780968]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 00:06:58.833) (total time: 10974ms): Nov 23 00:07:09 crc kubenswrapper[4743]: Trace[1911780968]: ---"Objects listed" error: 10973ms (00:07:09.807) Nov 23 00:07:09 crc kubenswrapper[4743]: Trace[1911780968]: [10.974155139s] [10.974155139s] END Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.807773 4743 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.808229 4743 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.808242 4743 trace.go:236] Trace[49518465]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 00:06:57.812) (total time: 11995ms): Nov 23 00:07:09 crc kubenswrapper[4743]: Trace[49518465]: ---"Objects listed" error: 11995ms (00:07:09.808) Nov 23 00:07:09 crc kubenswrapper[4743]: Trace[49518465]: [11.99526821s] [11.99526821s] END Nov 23 00:07:09 crc kubenswrapper[4743]: I1123 00:07:09.808266 4743 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.119897 4743 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53954->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.119948 4743 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53940->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.119982 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53954->192.168.126.11:17697: read: connection reset by peer" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.120059 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53940->192.168.126.11:17697: read: connection reset by peer" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.308830 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.309553 4743 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.309638 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.312773 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.630322 4743 apiserver.go:52] "Watching apiserver" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.632276 4743 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.632718 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.633138 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.634075 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.634191 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.634265 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.634953 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.635588 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.635667 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.636234 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.636293 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.637197 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.641083 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.641128 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.641236 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.641250 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.641342 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.641614 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.641741 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.641781 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.649136 4743 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 00:07:10 crc 
kubenswrapper[4743]: I1123 00:07:10.661415 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.677607 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.692642 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.704207 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.712914 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.712992 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.713016 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.713052 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.713071 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.713089 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.713122 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.713140 4743 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.713159 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.713175 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.713865 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.714008 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.714121 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.714224 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.714337 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.714510 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.714625 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 00:07:10 crc 
kubenswrapper[4743]: I1123 00:07:10.714783 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.714883 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.714984 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.715080 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.715193 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.715298 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.715395 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.715511 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.715622 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.715724 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.715829 4743 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.715931 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716030 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716133 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.714329 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.715032 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716045 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716197 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716456 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716586 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716695 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716811 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716921 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717024 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717139 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717244 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717337 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717425 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717544 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717651 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717748 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717846 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717937 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718034 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718138 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718248 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718352 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718479 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718604 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718707 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718795 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718895 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718990 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719089 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719192 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719286 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719388 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719511 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719654 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719777 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719893 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720003 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720106 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720209 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720314 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720428 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720579 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720708 4743 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720806 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720907 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720997 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.721086 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.721192 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.721279 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.721364 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.721449 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.721555 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 
23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.721651 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.721745 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.721841 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.721928 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.722007 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.722103 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.722182 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.722273 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.722360 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.722477 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736206 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736380 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736442 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736514 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736551 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736597 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736640 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736679 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736723 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736776 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736812 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736861 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736905 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736943 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736975 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737023 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737064 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737091 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737115 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737141 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737166 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737192 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737224 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737248 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737275 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737309 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737346 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737372 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737409 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737444 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737537 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737578 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737629 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716558 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716643 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716656 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716876 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716896 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.716969 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717103 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717208 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717264 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717421 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717440 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.717665 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718000 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718012 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718170 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718303 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718333 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718551 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.718962 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719303 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719453 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719658 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.719983 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720124 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720599 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720603 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720810 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720838 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). 
InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720880 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.720944 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.723018 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.723118 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.723217 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.723339 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.723516 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.723666 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.723851 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.724031 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.724274 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.724279 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.724823 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.724897 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.725050 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.725158 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.725308 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.725648 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.727767 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.728211 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.728295 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.728445 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.728684 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.734582 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.735919 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736385 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736396 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737811 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.737960 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.738393 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.738612 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.738629 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.738168 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742154 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742340 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742408 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742441 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742466 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742502 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742539 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742567 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742591 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742613 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742638 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742663 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742688 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742715 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742739 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742764 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742789 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742812 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742847 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742869 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742892 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742912 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742935 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742957 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742975 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.742997 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743017 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743041 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743060 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743083 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743106 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743128 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743152 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743174 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743194 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743215 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743243 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743263 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743284 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743306 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743328 4743 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743350 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743373 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743396 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743428 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743448 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743471 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743517 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743564 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743587 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743610 4743 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743637 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743660 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743686 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743712 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743732 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743754 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743778 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743798 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743828 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 
00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743861 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743893 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743919 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743950 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.743975 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.744000 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.744028 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.744056 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.744077 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.744098 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 
00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.744132 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.744154 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.744173 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.744791 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.744834 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745153 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745266 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745336 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745366 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745397 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745471 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745550 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745575 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745603 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745629 4743 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745728 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745795 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.745997 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.746028 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.746284 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.748554 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.736167 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.748691 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.748880 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.748929 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.749757 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.750001 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.750360 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.750960 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.751237 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.751378 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.751866 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.753744 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.756744 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.757084 4743 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.752287 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.753473 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.755448 4743 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.758756 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.758864 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.758941 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.758995 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.759023 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.761905 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.762348 4743 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.762650 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:11.262605806 +0000 UTC m=+23.340703933 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.762689 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.757678 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.764030 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.764111 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.764462 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.763582 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.764967 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.765268 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.765632 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.765919 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.766098 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.766266 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.786675 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.791538 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.782503 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.782579 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.782678 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.782706 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.782776 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.782877 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.783040 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.783229 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.783277 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.783351 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.783423 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.783443 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.783442 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.783688 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.784887 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.804846 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.785005 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.805034 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.805052 4743 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.805382 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.805761 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.805753 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.806009 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.784771 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.797120 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.757392 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.785732 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:11.285692284 +0000 UTC m=+23.363790411 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.806414 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:07:11.306386968 +0000 UTC m=+23.384485095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.806467 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:11.30645829 +0000 UTC m=+23.384556417 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.785825 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.786064 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.786313 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.796744 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.797134 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.797611 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.797636 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.797924 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.798338 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.798282 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.798533 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.798913 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.798955 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.799624 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.798975 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.804593 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.813127 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.813328 4743 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.813433 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:11.313398435 +0000 UTC m=+23.391496562 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.813427 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.813638 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.813643 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.813794 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.813944 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.814216 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.814733 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.814804 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.815415 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.815742 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.815989 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.816154 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.816249 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.816324 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.819312 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.819624 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca4e37d81a441bff642839c92a2dce5cd9e9091c898a18b299bb1560669cc2c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:06:53Z\\\",\\\"message\\\":\\\"W1123 00:06:52.540701 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 
00:06:52.541170 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763856412 cert, and key in /tmp/serving-cert-3312799841/serving-signer.crt, /tmp/serving-cert-3312799841/serving-signer.key\\\\nI1123 00:06:52.966438 1 observer_polling.go:159] Starting file observer\\\\nW1123 00:06:52.969000 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 00:06:52.969265 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:06:52.970363 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3312799841/tls.crt::/tmp/serving-cert-3312799841/tls.key\\\\\\\"\\\\nF1123 00:06:53.383094 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\
" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.819723 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.783366 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.767572 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.819954 4743 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.819980 4743 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820000 4743 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820018 4743 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820035 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820051 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820067 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820083 4743 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820116 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820130 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820143 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820154 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820168 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820180 4743 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820194 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820204 4743 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820218 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820494 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.820863 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.821513 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.821857 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.822092 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.822213 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.822374 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.822699 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.823719 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.824040 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.824376 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.824555 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.824920 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.825276 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.825469 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.826831 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.827622 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.827874 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.829601 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.833236 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.833616 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.833630 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.833793 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.834419 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.835632 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.836589 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.837700 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.837952 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.840128 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.840218 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.840347 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.840982 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.840991 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.841104 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.841795 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.843119 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.843149 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.843163 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.844408 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.852003 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.853237 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.858166 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.858746 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.869846 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.870611 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.870789 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.872434 4743 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70" exitCode=255 Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.873104 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70"} Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.873190 4743 scope.go:117] "RemoveContainer" containerID="ca4e37d81a441bff642839c92a2dce5cd9e9091c898a18b299bb1560669cc2c4" Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.882769 4743 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.883211 4743 scope.go:117] "RemoveContainer" containerID="cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70" Nov 23 00:07:10 crc kubenswrapper[4743]: E1123 00:07:10.883619 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.888845 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.901318 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.913151 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920663 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920756 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920815 4743 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920828 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920839 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920852 4743 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920863 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920852 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920883 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920873 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920944 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920964 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920978 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.920991 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921004 4743 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921014 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921026 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921039 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921051 4743 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921064 4743 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921076 4743 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath 
\"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921088 4743 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921099 4743 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921111 4743 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921123 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921134 4743 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921145 4743 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921159 4743 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921172 4743 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921183 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921195 4743 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921209 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921220 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921232 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 
00:07:10.921245 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921258 4743 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921272 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921284 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921297 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921311 4743 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921323 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921335 4743 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921347 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921361 4743 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921374 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921386 4743 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921398 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921412 4743 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921426 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921440 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921453 4743 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921466 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921480 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921517 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921531 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921544 4743 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921556 4743 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921568 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921580 4743 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921593 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921606 4743 
reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921622 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921639 4743 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921675 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921687 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921699 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921711 4743 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921722 4743 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921733 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921744 4743 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921756 4743 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921768 4743 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921779 4743 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921792 4743 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921806 4743 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921820 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921832 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921846 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921857 4743 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921869 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921883 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921915 4743 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921928 4743 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921943 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921955 4743 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921967 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921979 4743 reconciler_common.go:293] "Volume detached for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.921990 4743 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922003 4743 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922014 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922027 4743 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922040 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922052 4743 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922065 4743 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922078 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922088 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922100 4743 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922114 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922127 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922138 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" 
(UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922152 4743 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922164 4743 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922178 4743 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922189 4743 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922202 4743 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922218 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922230 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922243 4743 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922257 4743 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922269 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922281 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922324 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922339 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" 
(UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922354 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922367 4743 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922380 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922394 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922407 4743 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922418 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922431 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922443 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922456 4743 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922468 4743 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922498 4743 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922512 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922526 4743 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922538 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922553 4743 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922567 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922581 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922593 4743 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922606 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922621 4743 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922634 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922647 4743 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922660 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922672 4743 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922687 4743 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922701 4743 
reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922716 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922732 4743 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922745 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922759 4743 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922776 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922791 4743 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922806 4743 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922819 4743 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922834 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922849 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922863 4743 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922874 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc 
kubenswrapper[4743]: I1123 00:07:10.922883 4743 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922894 4743 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922904 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922913 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922924 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922934 4743 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922944 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922953 4743 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922964 4743 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922974 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922983 4743 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.922992 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923020 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923031 4743 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923040 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923050 4743 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923060 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923071 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923084 4743 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923094 4743 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923104 4743 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923114 4743 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923123 4743 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923133 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923142 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923151 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923160 4743 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923169 4743 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923178 4743 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.923188 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.929035 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca4e37d81a441bff642839c92a2dce5cd9e9091c898a18b299bb1560669cc2c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:06:53Z\\\",\\\"message\\\":\\\"W1123 00:06:52.540701 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 
00:06:52.541170 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763856412 cert, and key in /tmp/serving-cert-3312799841/serving-signer.crt, /tmp/serving-cert-3312799841/serving-signer.key\\\\nI1123 00:06:52.966438 1 observer_polling.go:159] Starting file observer\\\\nW1123 00:06:52.969000 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 00:06:52.969265 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:06:52.970363 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3312799841/tls.crt::/tmp/serving-cert-3312799841/tls.key\\\\\\\"\\\\nF1123 00:06:53.383094 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.941932 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.948259 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.956588 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.956977 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.965076 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 00:07:10 crc kubenswrapper[4743]: I1123 00:07:10.981737 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.280089 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-kvwqd"] Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.280473 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kvwqd" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.282678 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.282814 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.284507 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.297375 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.309523 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.321508 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.328790 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.328870 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.328897 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.328941 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.328966 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ca2e6214-c5b2-4734-944c-efbf7e76ad99-hosts-file\") pod \"node-resolver-kvwqd\" (UID: \"ca2e6214-c5b2-4734-944c-efbf7e76ad99\") " pod="openshift-dns/node-resolver-kvwqd" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.329007 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrsbn\" (UniqueName: \"kubernetes.io/projected/ca2e6214-c5b2-4734-944c-efbf7e76ad99-kube-api-access-rrsbn\") pod \"node-resolver-kvwqd\" (UID: \"ca2e6214-c5b2-4734-944c-efbf7e76ad99\") " pod="openshift-dns/node-resolver-kvwqd" Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329055 4743 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:07:12.32901276 +0000 UTC m=+24.407110897 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.329135 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329152 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329194 4743 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329276 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:12.329258306 +0000 UTC m=+24.407356433 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329197 4743 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329209 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329455 4743 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329419 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-23 00:07:12.32939068 +0000 UTC m=+24.407488987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329596 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:12.329543244 +0000 UTC m=+24.407641371 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329678 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329694 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329707 4743 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.329748 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:12.329740119 +0000 UTC m=+24.407838246 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.331408 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.351506 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca4e37d81a441bff642839c92a2dce5cd9e9091c898a18b299bb1560669cc2c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:06:53Z\\\",\\\"message\\\":\\\"W1123 00:06:52.540701 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 00:06:52.541170 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763856412 cert, and key in /tmp/serving-cert-3312799841/serving-signer.crt, /tmp/serving-cert-3312799841/serving-signer.key\\\\nI1123 00:06:52.966438 1 observer_polling.go:159] Starting file observer\\\\nW1123 00:06:52.969000 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 00:06:52.969265 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:06:52.970363 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3312799841/tls.crt::/tmp/serving-cert-3312799841/tls.key\\\\\\\"\\\\nF1123 00:06:53.383094 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator 
for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.363863 4743 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.374427 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.773542 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.773755 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.774217 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ca2e6214-c5b2-4734-944c-efbf7e76ad99-hosts-file\") pod \"node-resolver-kvwqd\" (UID: \"ca2e6214-c5b2-4734-944c-efbf7e76ad99\") " pod="openshift-dns/node-resolver-kvwqd" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.774282 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrsbn\" (UniqueName: \"kubernetes.io/projected/ca2e6214-c5b2-4734-944c-efbf7e76ad99-kube-api-access-rrsbn\") pod \"node-resolver-kvwqd\" (UID: \"ca2e6214-c5b2-4734-944c-efbf7e76ad99\") " pod="openshift-dns/node-resolver-kvwqd" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.774669 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ca2e6214-c5b2-4734-944c-efbf7e76ad99-hosts-file\") pod \"node-resolver-kvwqd\" (UID: \"ca2e6214-c5b2-4734-944c-efbf7e76ad99\") " pod="openshift-dns/node-resolver-kvwqd" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.827188 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zvknx"] Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.827557 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-s4k55"] Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.827890 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.828441 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.829226 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v64gz"] Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.830338 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.833200 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.834094 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.834140 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-cxtxv"] Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.834341 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.834431 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.834656 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.834683 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.835427 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.836811 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.836931 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.837458 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.837888 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.841902 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.842049 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 
00:07:11.842110 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.842065 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.842194 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.842051 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.842051 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.842086 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.842143 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.842589 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.847073 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrsbn\" (UniqueName: \"kubernetes.io/projected/ca2e6214-c5b2-4734-944c-efbf7e76ad99-kube-api-access-rrsbn\") pod \"node-resolver-kvwqd\" (UID: \"ca2e6214-c5b2-4734-944c-efbf7e76ad99\") " pod="openshift-dns/node-resolver-kvwqd" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.856972 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.870429 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.874816 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-netns\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.874883 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-cnibin\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.874920 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-slash\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.874955 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-ovn-kubernetes\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.874992 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srvps\" (UniqueName: \"kubernetes.io/projected/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-kube-api-access-srvps\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.875023 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-system-cni-dir\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.875055 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-var-lib-cni-bin\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.875104 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-hostroot\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.875182 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-script-lib\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.875229 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-mcd-auth-proxy-config\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.875449 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-var-lib-openvswitch\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.875559 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-run-k8s-cni-cncf-io\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.875637 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-etc-kubernetes\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.875819 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-cnibin\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.875886 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-daemon-config\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.875924 4743 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-config\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876272 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-cni-binary-copy\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876362 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-kubelet\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876403 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-systemd\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876428 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-openvswitch\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876453 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84h6b\" (UniqueName: \"kubernetes.io/projected/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-kube-api-access-84h6b\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876512 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-systemd-units\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876535 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-etc-openvswitch\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876557 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-log-socket\") pod \"ovnkube-node-v64gz\" (UID: 
\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876583 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-var-lib-kubelet\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876609 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-os-release\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876660 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-run-multus-certs\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876686 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-proxy-tls\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876713 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovn-node-metrics-cert\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876741 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-system-cni-dir\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876766 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-cni-dir\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876792 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-socket-dir-parent\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876814 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-ovn\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876874 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876932 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-conf-dir\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.876980 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-rootfs\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.877017 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-netd\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.877029 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5"} Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.877054 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b0418df6-be6b-459c-8685-770bc9c99a0e-cni-binary-copy\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.877089 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d822eed79b49af483d5da68a829e47768568e385097a685642797fd2f4a20e8d"} Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.877120 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.877162 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.877197 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-var-lib-cni-multus\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.877240 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wt7d\" (UniqueName: \"kubernetes.io/projected/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-kube-api-access-5wt7d\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.877284 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-run-netns\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.877342 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-os-release\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.877391 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4q4\" (UniqueName: \"kubernetes.io/projected/b0418df6-be6b-459c-8685-770bc9c99a0e-kube-api-access-lx4q4\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.878227 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-node-log\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.878282 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-bin\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.878325 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-env-overrides\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.882747 4743 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.887381 4743 scope.go:117] "RemoveContainer" containerID="cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70" Nov 23 00:07:11 crc kubenswrapper[4743]: E1123 00:07:11.887611 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.887956 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.889023 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"88bc9ddf2720b23085783a5a161d7001ada4ab9f91464f8ad93772ac3ef38931"} Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.891981 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a"} Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.892048 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f"} Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.892072 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b7eb33ba5ecd9c000a5336a0fec8570155529abd4953e833d6866d1576d68186"} Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.895152 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kvwqd" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.902995 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.1
26.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.928985 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.945873 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.961739 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.974590 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca4e37d81a441bff642839c92a2dce5cd9e9091c898a18b299bb1560669cc2c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:06:53Z\\\",\\\"message\\\":\\\"W1123 00:06:52.540701 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 
00:06:52.541170 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763856412 cert, and key in /tmp/serving-cert-3312799841/serving-signer.crt, /tmp/serving-cert-3312799841/serving-signer.key\\\\nI1123 00:06:52.966438 1 observer_polling.go:159] Starting file observer\\\\nW1123 00:06:52.969000 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 00:06:52.969265 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:06:52.970363 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3312799841/tls.crt::/tmp/serving-cert-3312799841/tls.key\\\\\\\"\\\\nF1123 00:06:53.383094 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979661 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-conf-dir\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979721 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-rootfs\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979760 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-ovn\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-ovn\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979785 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979821 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-netd\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979841 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b0418df6-be6b-459c-8685-770bc9c99a0e-cni-binary-copy\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979862 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979886 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-var-lib-cni-multus\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979885 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-ovn\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979907 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979922 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wt7d\" (UniqueName: \"kubernetes.io/projected/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-kube-api-access-5wt7d\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979940 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-run-netns\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979929 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-conf-dir\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979992 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-node-log\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979970 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-node-log\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980023 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-bin\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980046 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-bin\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980067 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-env-overrides\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980079 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980095 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-os-release\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980118 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx4q4\" (UniqueName: \"kubernetes.io/projected/b0418df6-be6b-459c-8685-770bc9c99a0e-kube-api-access-lx4q4\") pod \"multus-zvknx\" (UID: 
\"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980177 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-netns\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980195 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-cnibin\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980229 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-slash\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980249 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-ovn-kubernetes\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980271 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-var-lib-cni-bin\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980290 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-hostroot\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980340 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srvps\" (UniqueName: \"kubernetes.io/projected/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-kube-api-access-srvps\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980357 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-system-cni-dir\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980379 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-mcd-auth-proxy-config\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 
00:07:11.980411 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-script-lib\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980429 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-run-k8s-cni-cncf-io\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980449 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-etc-kubernetes\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980501 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-var-lib-openvswitch\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980521 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-daemon-config\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980547 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-cnibin\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980564 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-kubelet\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980580 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-systemd\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980598 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-openvswitch\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980613 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-config\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980629 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-cni-binary-copy\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980655 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-etc-openvswitch\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980675 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-log-socket\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980690 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-var-lib-kubelet\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980715 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84h6b\" (UniqueName: \"kubernetes.io/projected/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-kube-api-access-84h6b\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980740 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-systemd-units\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980758 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-run-multus-certs\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980788 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-run-multus-certs\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980793 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-proxy-tls\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980820 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-netns\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980823 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-os-release\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980852 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovn-node-metrics-cert\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980870 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-system-cni-dir\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980888 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-cni-dir\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980905 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-socket-dir-parent\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.980985 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-os-release\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.981001 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-socket-dir-parent\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.981021 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-netd\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.981049 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-cnibin\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.979950 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-rootfs\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.981300 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-slash\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.981327 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-ovn-kubernetes\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.981350 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-var-lib-cni-bin\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.981372 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-hostroot\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.981630 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-env-overrides\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.981650 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b0418df6-be6b-459c-8685-770bc9c99a0e-cni-binary-copy\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.981691 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-os-release\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.981738 
4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-system-cni-dir\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.982212 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.982252 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-var-lib-cni-multus\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.982398 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-mcd-auth-proxy-config\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.982782 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-openvswitch\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.982805 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.982836 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-run-k8s-cni-cncf-io\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.982873 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-etc-kubernetes\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.982894 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-script-lib\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.983409 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-config\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.983459 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-systemd-units\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.983459 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-var-lib-openvswitch\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.983817 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-cni-binary-copy\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.983862 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-etc-openvswitch\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.983892 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-log-socket\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.983922 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-var-lib-kubelet\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.983952 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-system-cni-dir\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.984080 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-daemon-config\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.984114 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-cnibin\") pod \"multus-additional-cni-plugins-s4k55\" (UID: 
\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.984143 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-kubelet\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.984172 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-systemd\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.984231 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-multus-cni-dir\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.984612 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0418df6-be6b-459c-8685-770bc9c99a0e-host-run-netns\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.986742 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.988588 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovn-node-metrics-cert\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:11 crc kubenswrapper[4743]: I1123 00:07:11.989030 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-proxy-tls\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.004039 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srvps\" (UniqueName: \"kubernetes.io/projected/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-kube-api-access-srvps\") pod \"ovnkube-node-v64gz\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.004736 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wt7d\" (UniqueName: \"kubernetes.io/projected/ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3-kube-api-access-5wt7d\") pod \"multus-additional-cni-plugins-s4k55\" (UID: \"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\") " pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.006006 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84h6b\" (UniqueName: \"kubernetes.io/projected/dbda6ee4-c567-4104-9c7a-ca01c6f9d989-kube-api-access-84h6b\") pod \"machine-config-daemon-cxtxv\" (UID: \"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\") " pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.008922 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.012124 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx4q4\" (UniqueName: \"kubernetes.io/projected/b0418df6-be6b-459c-8685-770bc9c99a0e-kube-api-access-lx4q4\") pod \"multus-zvknx\" (UID: \"b0418df6-be6b-459c-8685-770bc9c99a0e\") " pod="openshift-multus/multus-zvknx" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.020678 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.043517 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.058738 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.070476 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.085817 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.100699 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.120067 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.132328 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.143102 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.145319 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-zvknx" Nov 23 00:07:12 crc kubenswrapper[4743]: W1123 00:07:12.158188 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0418df6_be6b_459c_8685_770bc9c99a0e.slice/crio-c20c95cf194e009a087fb62ddc0725ce16dc12b35f44ac854f6cd5ffa46d61e7 WatchSource:0}: Error finding container c20c95cf194e009a087fb62ddc0725ce16dc12b35f44ac854f6cd5ffa46d61e7: Status 404 returned error can't find the container with id c20c95cf194e009a087fb62ddc0725ce16dc12b35f44ac854f6cd5ffa46d61e7 Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.160318 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.163847 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-s4k55" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.171913 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.174279 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.185092 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.188707 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.200450 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.225575 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.387830 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.388076 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:07:14.388035455 +0000 UTC m=+26.466133582 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.388240 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.388276 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.388299 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.388323 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.388514 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.388529 4743 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.388541 4743 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.388588 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:14.388571299 +0000 UTC m=+26.466669426 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.388885 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.388897 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.388905 4743 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.388927 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:14.388920449 +0000 UTC m=+26.467018576 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.388969 4743 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.388989 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-23 00:07:14.388983651 +0000 UTC m=+26.467081778 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.389014 4743 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.389033 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:14.389028022 +0000 UTC m=+26.467126149 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.721747 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.721956 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.722192 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:12 crc kubenswrapper[4743]: E1123 00:07:12.722378 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.727850 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.729032 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.730162 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.731184 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.732054 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.732941 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.734048 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.734837 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.735829 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.736577 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.737278 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.739960 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.740665 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.741842 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.742663 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.743722 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.744329 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.744788 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.745936 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.746576 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.747574 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.748227 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.748713 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.749846 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.750317 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.751363 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.752016 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.753009 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.753784 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.754820 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.755299 4743 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.755404 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.757259 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.758401 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.758843 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.760387 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.761453 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.762402 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.763546 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.764591 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.765464 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.766150 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.767268 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.767914 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.768779 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.769386 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.770694 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.771626 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.772569 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.773046 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.773707 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.774822 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.775469 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.776651 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.898119 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e" exitCode=0 Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.898160 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" 
event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e"} Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.898230 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"917e766fedb18fb0ed58a31f0f8be6095ccd2e66cc25cd909c4f7d6513b6159e"} Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.901039 4743 generic.go:334] "Generic (PLEG): container finished" podID="ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3" containerID="18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5" exitCode=0 Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.901118 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" event={"ID":"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3","Type":"ContainerDied","Data":"18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5"} Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.901173 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" event={"ID":"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3","Type":"ContainerStarted","Data":"061fe733047c482b8c74d5a1bd08e3646d39d7088fd996052142a858d544e753"} Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.903319 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kvwqd" event={"ID":"ca2e6214-c5b2-4734-944c-efbf7e76ad99","Type":"ContainerStarted","Data":"4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770"} Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.903356 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kvwqd" event={"ID":"ca2e6214-c5b2-4734-944c-efbf7e76ad99","Type":"ContainerStarted","Data":"3524d9b45ba64e21b24600a167124f2304880e1388da0896dfcf195b16b5c9d5"} Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.905081 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zvknx" event={"ID":"b0418df6-be6b-459c-8685-770bc9c99a0e","Type":"ContainerStarted","Data":"c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba"} Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.905108 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zvknx" event={"ID":"b0418df6-be6b-459c-8685-770bc9c99a0e","Type":"ContainerStarted","Data":"c20c95cf194e009a087fb62ddc0725ce16dc12b35f44ac854f6cd5ffa46d61e7"} Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.907699 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479"} Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.907825 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4"} Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.907909 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" 
event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"e19752a7817895f1e028f48ab8f3e8124ccdd79b97d94e97e18115224ed2af38"} Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.924528 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.947522 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:12 crc kubenswrapper[4743]: I1123 00:07:12.979025 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.008773 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.056271 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.081009 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.095556 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.108901 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.122812 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.137319 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.149005 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.162633 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.187624 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.206449 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.223817 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc 
kubenswrapper[4743]: I1123 00:07:13.238429 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.253868 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.271916 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.288118 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.306866 4743 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.321405 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.340043 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.359788 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.373830 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.721821 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:13 crc kubenswrapper[4743]: E1123 00:07:13.721999 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.738989 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vwqq6"] Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.739518 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vwqq6" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.746001 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.746647 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.747193 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.747521 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.774213 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.793284 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.805281 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwzld\" (UniqueName: \"kubernetes.io/projected/15024711-2e2a-406c-b47f-19b3dabc6202-kube-api-access-qwzld\") pod \"node-ca-vwqq6\" (UID: \"15024711-2e2a-406c-b47f-19b3dabc6202\") " pod="openshift-image-registry/node-ca-vwqq6" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.805335 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15024711-2e2a-406c-b47f-19b3dabc6202-host\") pod \"node-ca-vwqq6\" (UID: \"15024711-2e2a-406c-b47f-19b3dabc6202\") " pod="openshift-image-registry/node-ca-vwqq6" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.805366 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/15024711-2e2a-406c-b47f-19b3dabc6202-serviceca\") pod \"node-ca-vwqq6\" (UID: \"15024711-2e2a-406c-b47f-19b3dabc6202\") " pod="openshift-image-registry/node-ca-vwqq6" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.806178 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.825851 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.842150 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.858472 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.876141 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z 
is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.889746 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.902951 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.906123 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15024711-2e2a-406c-b47f-19b3dabc6202-host\") pod \"node-ca-vwqq6\" (UID: \"15024711-2e2a-406c-b47f-19b3dabc6202\") " pod="openshift-image-registry/node-ca-vwqq6" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.906170 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/15024711-2e2a-406c-b47f-19b3dabc6202-serviceca\") pod \"node-ca-vwqq6\" (UID: \"15024711-2e2a-406c-b47f-19b3dabc6202\") " pod="openshift-image-registry/node-ca-vwqq6" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.906216 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwzld\" (UniqueName: \"kubernetes.io/projected/15024711-2e2a-406c-b47f-19b3dabc6202-kube-api-access-qwzld\") pod \"node-ca-vwqq6\" (UID: \"15024711-2e2a-406c-b47f-19b3dabc6202\") " pod="openshift-image-registry/node-ca-vwqq6" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.906296 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15024711-2e2a-406c-b47f-19b3dabc6202-host\") pod \"node-ca-vwqq6\" (UID: \"15024711-2e2a-406c-b47f-19b3dabc6202\") " pod="openshift-image-registry/node-ca-vwqq6" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.907256 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/15024711-2e2a-406c-b47f-19b3dabc6202-serviceca\") pod \"node-ca-vwqq6\" (UID: \"15024711-2e2a-406c-b47f-19b3dabc6202\") " pod="openshift-image-registry/node-ca-vwqq6" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.913357 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e"} Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.917608 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0"} Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.917660 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243"} Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.917675 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed"} Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.917689 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b"} Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.917702 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7"} Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.917714 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7"} Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.919781 4743 generic.go:334] "Generic (PLEG): container finished" podID="ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3" containerID="b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf" exitCode=0 Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.919826 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" event={"ID":"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3","Type":"ContainerDied","Data":"b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf"} Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.925389 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.930516 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwzld\" (UniqueName: \"kubernetes.io/projected/15024711-2e2a-406c-b47f-19b3dabc6202-kube-api-access-qwzld\") pod \"node-ca-vwqq6\" (UID: \"15024711-2e2a-406c-b47f-19b3dabc6202\") " pod="openshift-image-registry/node-ca-vwqq6" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.946147 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.966442 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:13 crc kubenswrapper[4743]: I1123 00:07:13.983054 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:13Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.018986 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.037762 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.061141 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.092962 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.104458 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.129310 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.144741 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.148423 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.153506 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.159686 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.160138 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.174314 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.188354 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.202361 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.218260 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.229633 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.245285 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 
2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.257106 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.270129 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.283272 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.291243 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vwqq6" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.302840 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: W1123 00:07:14.305477 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15024711_2e2a_406c_b47f_19b3dabc6202.slice/crio-69177baae838a118d787c8ccdda6e76b65f23b5c5d7e08ae051c25a75fd14364 WatchSource:0}: Error finding container 69177baae838a118d787c8ccdda6e76b65f23b5c5d7e08ae051c25a75fd14364: Status 404 returned error can't find the container with id 69177baae838a118d787c8ccdda6e76b65f23b5c5d7e08ae051c25a75fd14364 Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.318642 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.331966 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.349582 4743 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.370115 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.388353 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.401451 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.411224 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.411383 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411402 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:07:18.41137713 +0000 UTC m=+30.489475257 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.411450 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.411539 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.411570 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411588 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411620 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411633 4743 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411689 4743 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411699 4743 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411731 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411691 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:18.411673928 +0000 UTC m=+30.489772045 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411749 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411763 4743 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411772 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:18.41175456 +0000 UTC m=+30.489852687 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411796 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:18.411788131 +0000 UTC m=+30.489886258 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.411815 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:18.411807091 +0000 UTC m=+30.489905218 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.414678 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.426993 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.444373 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.722119 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.722215 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.722328 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:14 crc kubenswrapper[4743]: E1123 00:07:14.722412 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.930645 4743 generic.go:334] "Generic (PLEG): container finished" podID="ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3" containerID="1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8" exitCode=0 Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.930728 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" event={"ID":"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3","Type":"ContainerDied","Data":"1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8"} Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.932370 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vwqq6" event={"ID":"15024711-2e2a-406c-b47f-19b3dabc6202","Type":"ContainerStarted","Data":"e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981"} Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.932408 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vwqq6" event={"ID":"15024711-2e2a-406c-b47f-19b3dabc6202","Type":"ContainerStarted","Data":"69177baae838a118d787c8ccdda6e76b65f23b5c5d7e08ae051c25a75fd14364"} Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.947181 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.966465 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.981566 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:14 crc kubenswrapper[4743]: I1123 00:07:14.995987 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.008726 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.020456 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.034748 4743 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.047049 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.063473 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.078792 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.091862 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.101136 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.122225 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.137469 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.154614 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.168029 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.185264 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.197843 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.207860 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.217361 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.235080 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.247359 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.258969 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.271814 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.286859 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.300383 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.313395 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.324705 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.721876 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:15 crc kubenswrapper[4743]: E1123 00:07:15.722194 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.944871 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca"} Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.948749 4743 generic.go:334] "Generic (PLEG): container finished" podID="ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3" containerID="66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d" exitCode=0 Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.948822 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" event={"ID":"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3","Type":"ContainerDied","Data":"66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d"} Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.969181 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:15 crc kubenswrapper[4743]: I1123 00:07:15.986684 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.003298 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.022386 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.037867 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.050264 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.068361 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.082970 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.102674 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.117143 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhoo
k\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.132142 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.152022 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.167108 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.177572 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.206600 4743 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.208780 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.208815 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.208826 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.209000 4743 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.215533 4743 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.215868 4743 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.217019 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.217052 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.217063 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.217081 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.217093 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: E1123 00:07:16.232109 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.236904 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.236941 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.236951 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.236968 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.236981 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: E1123 00:07:16.248810 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.252698 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.252751 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.252762 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.252783 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.252796 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: E1123 00:07:16.266615 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.270742 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.270812 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.270824 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.270842 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.270857 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: E1123 00:07:16.283549 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.287415 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.287560 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.287626 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.287706 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.287799 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: E1123 00:07:16.300411 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: E1123 00:07:16.300620 4743 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.302818 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.302862 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.302878 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.302902 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.302918 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.406230 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.406288 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.406300 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.406325 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.406341 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.508865 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.508925 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.508944 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.508969 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.508986 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.611586 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.611912 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.612032 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.612135 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.612216 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.714551 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.714928 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.715019 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.715136 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.715246 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.722260 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:16 crc kubenswrapper[4743]: E1123 00:07:16.722463 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.723933 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:16 crc kubenswrapper[4743]: E1123 00:07:16.724415 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.819540 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.819610 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.819624 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.819648 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.819662 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.923197 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.923266 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.923287 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.923315 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.923333 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:16Z","lastTransitionTime":"2025-11-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.956113 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" event={"ID":"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3","Type":"ContainerStarted","Data":"7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee"} Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.969539 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.980702 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:16 crc kubenswrapper[4743]: I1123 00:07:16.996775 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.017528 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.029066 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.029124 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.029135 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.029171 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.029185 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:17Z","lastTransitionTime":"2025-11-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.033358 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.064553 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.092690 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.116008 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.131293 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.131332 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.131346 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.131389 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.131403 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:17Z","lastTransitionTime":"2025-11-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.133175 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.157743 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.187222 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.206066 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.225746 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.235522 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.235602 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.235622 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.235652 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.235672 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:17Z","lastTransitionTime":"2025-11-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.244197 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.338720 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.338794 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.338815 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.338849 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.338870 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:17Z","lastTransitionTime":"2025-11-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.442279 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.442397 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.442425 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.442452 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.442474 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:17Z","lastTransitionTime":"2025-11-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.545430 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.545547 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.545567 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.545598 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.545619 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:17Z","lastTransitionTime":"2025-11-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.649602 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.649660 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.649677 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.649821 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.649849 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:17Z","lastTransitionTime":"2025-11-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.721823 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:17 crc kubenswrapper[4743]: E1123 00:07:17.722086 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.753955 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.754055 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.754082 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.754117 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.754146 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:17Z","lastTransitionTime":"2025-11-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.858072 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.858152 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.858170 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.858206 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.858226 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:17Z","lastTransitionTime":"2025-11-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.961323 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.961397 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.961418 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.961445 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:17 crc kubenswrapper[4743]: I1123 00:07:17.961465 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:17Z","lastTransitionTime":"2025-11-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.047369 4743 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.048616 4743 scope.go:117] "RemoveContainer" containerID="cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70" Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.048909 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.065337 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.065410 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.065437 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.065471 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.065542 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:18Z","lastTransitionTime":"2025-11-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.168132 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.168213 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.168235 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.168263 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.168286 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:18Z","lastTransitionTime":"2025-11-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.271833 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.271901 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.271920 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.271953 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.271973 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:18Z","lastTransitionTime":"2025-11-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.374978 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.375033 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.375051 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.375102 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.375121 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:18Z","lastTransitionTime":"2025-11-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.457054 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.457254 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.457329 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.457398 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457462 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:07:26.45742141 +0000 UTC m=+38.535519577 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.457552 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457588 4743 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457604 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457621 4743 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457656 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457689 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:26.457660877 +0000 UTC m=+38.535759214 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457692 4743 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457743 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:26.457715158 +0000 UTC m=+38.535813325 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457792 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457820 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457845 4743 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457798 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:26.45776467 +0000 UTC m=+38.535862837 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.457911 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:26.457894823 +0000 UTC m=+38.535992980 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.479746 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.479819 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.479841 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.479877 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.479904 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:18Z","lastTransitionTime":"2025-11-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.584894 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.585480 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.585544 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.585574 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.585593 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:18Z","lastTransitionTime":"2025-11-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.689595 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.689648 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.689667 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.689702 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.689719 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:18Z","lastTransitionTime":"2025-11-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.722934 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.722991 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.724613 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:18 crc kubenswrapper[4743]: E1123 00:07:18.724845 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.736586 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.752430 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.765655 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.783288 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.792724 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.792899 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.792966 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.793035 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.793117 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:18Z","lastTransitionTime":"2025-11-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.800176 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:
07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.814529 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.831985 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.848228 4743 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.905924 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.905967 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.905976 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.906038 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.906058 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:18Z","lastTransitionTime":"2025-11-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.910825 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.924200 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.940897 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.955873 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.970566 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.970985 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.971005 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.971119 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.977881 4743 generic.go:334] "Generic (PLEG): container finished" podID="ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3" containerID="7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee" exitCode=0 Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.977932 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" event={"ID":"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3","Type":"ContainerDied","Data":"7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee"} Nov 23 00:07:18 crc kubenswrapper[4743]: I1123 00:07:18.995761 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.009470 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.009577 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.009600 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.009631 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.009652 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:19Z","lastTransitionTime":"2025-11-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.019082 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.031744 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.032903 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.038197 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.054797 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.080761 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.104274 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.114998 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.115088 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.115118 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.115155 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.115183 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:19Z","lastTransitionTime":"2025-11-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.119537 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.136265 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.149646 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.172037 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.187374 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.203150 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.217454 4743 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.217519 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.217539 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.217561 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.217573 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:19Z","lastTransitionTime":"2025-11-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.223435 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.236983 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.252763 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.270835 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.286461 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.298289 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.320522 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.320599 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.320617 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.320977 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.321013 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:19Z","lastTransitionTime":"2025-11-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.322532 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56
dbfb53543f975305d1fa6e36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.343999 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.361465 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.383201 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.399513 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.413404 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.424467 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.424524 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.424538 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.424557 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.424568 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:19Z","lastTransitionTime":"2025-11-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.426662 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.440531 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.456642 4743 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.475742 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.490596 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.527572 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.527614 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.527630 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.527677 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.527697 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:19Z","lastTransitionTime":"2025-11-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.631244 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.631281 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.631294 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.631312 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.631328 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:19Z","lastTransitionTime":"2025-11-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.721616 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:19 crc kubenswrapper[4743]: E1123 00:07:19.721829 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.734528 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.734579 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.734590 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.734610 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.734623 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:19Z","lastTransitionTime":"2025-11-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.837509 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.838254 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.838295 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.838331 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.838355 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:19Z","lastTransitionTime":"2025-11-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.941913 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.941960 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.941970 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.941987 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.941999 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:19Z","lastTransitionTime":"2025-11-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.986783 4743 generic.go:334] "Generic (PLEG): container finished" podID="ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3" containerID="16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8" exitCode=0 Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.986878 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" event={"ID":"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3","Type":"ContainerDied","Data":"16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8"} Nov 23 00:07:19 crc kubenswrapper[4743]: I1123 00:07:19.986986 4743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.010130 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.025942 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.041426 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.046573 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.046627 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.046645 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.046752 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.046794 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:20Z","lastTransitionTime":"2025-11-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.057688 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.070831 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.086913 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.113107 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.130774 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.146627 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.152010 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.152056 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.152071 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.152092 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.152106 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:20Z","lastTransitionTime":"2025-11-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.160613 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.173437 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.193312 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410
d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.208002 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.219912 4743 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:20Z is after 2025-08-24T17:21:41Z" Nov 23 
00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.254367 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.254419 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.254437 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.254460 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.254472 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:20Z","lastTransitionTime":"2025-11-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.356863 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.356915 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.356926 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.356949 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.356965 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:20Z","lastTransitionTime":"2025-11-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.459763 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.459822 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.459836 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.459856 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.459871 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:20Z","lastTransitionTime":"2025-11-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.563295 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.563355 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.563368 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.563396 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.563413 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:20Z","lastTransitionTime":"2025-11-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.666822 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.666938 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.666966 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.666996 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.667018 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:20Z","lastTransitionTime":"2025-11-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.722019 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.722093 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:20 crc kubenswrapper[4743]: E1123 00:07:20.722290 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:20 crc kubenswrapper[4743]: E1123 00:07:20.722420 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.770648 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.770753 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.770781 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.770865 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.770962 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:20Z","lastTransitionTime":"2025-11-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.875052 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.875119 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.875425 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.875596 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.875617 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:20Z","lastTransitionTime":"2025-11-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.980139 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.980407 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.980426 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.980450 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:20 crc kubenswrapper[4743]: I1123 00:07:20.980465 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:20Z","lastTransitionTime":"2025-11-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.004005 4743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.004364 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" event={"ID":"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3","Type":"ContainerStarted","Data":"14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a"} Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.033178 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.058425 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.074854 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.083473 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.083559 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.083574 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.083593 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.083607 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:21Z","lastTransitionTime":"2025-11-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.092121 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.110092 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.124578 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.137226 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.158820 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.178508 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.186515 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.186573 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.186587 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.186610 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.186624 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:21Z","lastTransitionTime":"2025-11-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.195327 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.219631 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.239047 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.253595 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.265523 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:21Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.289846 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.289894 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.289906 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.289926 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.289939 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:21Z","lastTransitionTime":"2025-11-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.392573 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.392661 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.392685 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.392721 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.392746 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:21Z","lastTransitionTime":"2025-11-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.495853 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.495910 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.495924 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.495949 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.495967 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:21Z","lastTransitionTime":"2025-11-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.598721 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.598763 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.598772 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.598790 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.598802 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:21Z","lastTransitionTime":"2025-11-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.701390 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.701439 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.701450 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.701473 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.701507 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:21Z","lastTransitionTime":"2025-11-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.722095 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:21 crc kubenswrapper[4743]: E1123 00:07:21.722271 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.805233 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.805297 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.805309 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.805333 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.805345 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:21Z","lastTransitionTime":"2025-11-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.909359 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.909456 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.909478 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.909546 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:21 crc kubenswrapper[4743]: I1123 00:07:21.909570 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:21Z","lastTransitionTime":"2025-11-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.012154 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.012230 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.012264 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.012301 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.012325 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:22Z","lastTransitionTime":"2025-11-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.115426 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.115535 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.115557 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.115585 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.115605 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:22Z","lastTransitionTime":"2025-11-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.218644 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.218707 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.218732 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.218769 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.218793 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:22Z","lastTransitionTime":"2025-11-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.321857 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.321921 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.321935 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.321958 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.321972 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:22Z","lastTransitionTime":"2025-11-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.425379 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.425435 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.425450 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.425529 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.425551 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:22Z","lastTransitionTime":"2025-11-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.528532 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.528601 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.528612 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.528638 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.528651 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:22Z","lastTransitionTime":"2025-11-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.632961 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.633027 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.633042 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.633064 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.633086 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:22Z","lastTransitionTime":"2025-11-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.723713 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.723905 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:22 crc kubenswrapper[4743]: E1123 00:07:22.724074 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:22 crc kubenswrapper[4743]: E1123 00:07:22.724303 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.737653 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.737715 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.737736 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.737764 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.737784 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:22Z","lastTransitionTime":"2025-11-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.841293 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.841365 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.841384 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.841411 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.841430 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:22Z","lastTransitionTime":"2025-11-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.944367 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.944422 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.944433 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.944453 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:22 crc kubenswrapper[4743]: I1123 00:07:22.944465 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:22Z","lastTransitionTime":"2025-11-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.047940 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.047983 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.047992 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.048034 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.048048 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:23Z","lastTransitionTime":"2025-11-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.152302 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.152354 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.152367 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.152389 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.152404 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:23Z","lastTransitionTime":"2025-11-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.255969 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.256045 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.256070 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.256103 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.256128 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:23Z","lastTransitionTime":"2025-11-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.360262 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.360315 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.360330 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.360354 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.360368 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:23Z","lastTransitionTime":"2025-11-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.463506 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.463551 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.463563 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.463583 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.463597 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:23Z","lastTransitionTime":"2025-11-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.566602 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.566649 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.566668 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.566691 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.566708 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:23Z","lastTransitionTime":"2025-11-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.669865 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.669911 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.669924 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.669941 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.669953 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:23Z","lastTransitionTime":"2025-11-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.721794 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:23 crc kubenswrapper[4743]: E1123 00:07:23.721924 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.773263 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.773319 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.773333 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.773351 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.773364 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:23Z","lastTransitionTime":"2025-11-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.876513 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.876598 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.876617 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.876648 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.876668 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:23Z","lastTransitionTime":"2025-11-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.980019 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.980091 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.980154 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.980188 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:23 crc kubenswrapper[4743]: I1123 00:07:23.980211 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:23Z","lastTransitionTime":"2025-11-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.019772 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/0.log" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.024325 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36" exitCode=1 Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.024372 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36"} Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.025889 4743 scope.go:117] "RemoveContainer" containerID="56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.049588 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.071450 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.084215 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.084288 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.084309 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.084340 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.084360 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:24Z","lastTransitionTime":"2025-11-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.100277 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.116992 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.134765 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.150540 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.187827 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.188195 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.188359 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.188522 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.188650 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:24Z","lastTransitionTime":"2025-11-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.192415 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56
dbfb53543f975305d1fa6e36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:23Z\\\",\\\"message\\\":\\\"ector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1123 00:07:23.148308 5995 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 00:07:23.149624 5995 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 00:07:23.149695 5995 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 00:07:23.149705 5995 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 00:07:23.149777 5995 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 00:07:23.149801 5995 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 00:07:23.149818 5995 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 00:07:23.149849 5995 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 00:07:23.149865 5995 factory.go:656] Stopping watch factory\\\\nI1123 00:07:23.149961 5995 ovnkube.go:599] Stopped ovnkube\\\\nI1123 00:07:23.149866 5995 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 00:07:23.149878 5995 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.211454 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.230044 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.242791 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.264022 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.288568 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.292593 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.292691 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:24 crc 
kubenswrapper[4743]: I1123 00:07:24.292720 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.292759 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.292790 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:24Z","lastTransitionTime":"2025-11-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.311940 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.330441 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:24Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.396922 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.397527 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.397556 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.397591 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.397612 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:24Z","lastTransitionTime":"2025-11-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.500739 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.501206 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.501423 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.501630 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.501769 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:24Z","lastTransitionTime":"2025-11-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.605114 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.605159 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.605172 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.605192 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.605208 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:24Z","lastTransitionTime":"2025-11-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.709328 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.709412 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.709432 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.709464 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.709524 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:24Z","lastTransitionTime":"2025-11-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.721813 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:24 crc kubenswrapper[4743]: E1123 00:07:24.722015 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.722532 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:24 crc kubenswrapper[4743]: E1123 00:07:24.722650 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.812682 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.812741 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.812751 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.812778 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.812791 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:24Z","lastTransitionTime":"2025-11-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.916105 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.916171 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.916195 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.916222 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:24 crc kubenswrapper[4743]: I1123 00:07:24.916244 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:24Z","lastTransitionTime":"2025-11-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.019308 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.019366 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.019391 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.019420 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.019440 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:25Z","lastTransitionTime":"2025-11-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.123637 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.123717 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.123741 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.123778 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.123805 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:25Z","lastTransitionTime":"2025-11-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.227183 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.227627 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.227733 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.227824 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.227910 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:25Z","lastTransitionTime":"2025-11-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.331575 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.331644 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.331665 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.331690 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.331710 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:25Z","lastTransitionTime":"2025-11-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.415258 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.434682 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.434758 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.434778 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.434812 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.434831 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:25Z","lastTransitionTime":"2025-11-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.516620 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t"] Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.517199 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.522140 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.522205 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.539737 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.539836 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.539952 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.539998 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.540085 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:25Z","lastTransitionTime":"2025-11-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.543057 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.564201 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.580165 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.595332 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.610128 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.620708 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.639740 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2354a4bd-98b1-489f-a4dc-562d4ce123ba-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.640155 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2354a4bd-98b1-489f-a4dc-562d4ce123ba-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.640424 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt59c\" (UniqueName: \"kubernetes.io/projected/2354a4bd-98b1-489f-a4dc-562d4ce123ba-kube-api-access-lt59c\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.640639 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2354a4bd-98b1-489f-a4dc-562d4ce123ba-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.643773 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.643854 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.643875 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.643902 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.643925 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:25Z","lastTransitionTime":"2025-11-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.653400 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:23Z\\\",\\\"message\\\":\\\"ector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1123 00:07:23.148308 5995 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 00:07:23.149624 5995 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 00:07:23.149695 5995 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 00:07:23.149705 5995 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 00:07:23.149777 5995 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 00:07:23.149801 5995 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 00:07:23.149818 5995 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 00:07:23.149849 5995 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 00:07:23.149865 5995 factory.go:656] Stopping watch factory\\\\nI1123 00:07:23.149961 5995 ovnkube.go:599] Stopped ovnkube\\\\nI1123 00:07:23.149866 5995 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 00:07:23.149878 5995 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.674363 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.688439 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.705584 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.721962 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:25 crc kubenswrapper[4743]: E1123 00:07:25.722460 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.728164 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.741620 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2354a4bd-98b1-489f-a4dc-562d4ce123ba-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.741693 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2354a4bd-98b1-489f-a4dc-562d4ce123ba-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.741720 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2354a4bd-98b1-489f-a4dc-562d4ce123ba-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.741783 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt59c\" (UniqueName: \"kubernetes.io/projected/2354a4bd-98b1-489f-a4dc-562d4ce123ba-kube-api-access-lt59c\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.742470 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2354a4bd-98b1-489f-a4dc-562d4ce123ba-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.742679 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2354a4bd-98b1-489f-a4dc-562d4ce123ba-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.747570 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.747920 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.748015 4743 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.748114 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.748206 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:25Z","lastTransitionTime":"2025-11-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.748358 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2354a4bd-98b1-489f-a4dc-562d4ce123ba-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.751962 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388
416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\
"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishe
dAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.761853 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt59c\" (UniqueName: \"kubernetes.io/projected/2354a4bd-98b1-489f-a4dc-562d4ce123ba-kube-api-access-lt59c\") pod \"ovnkube-control-plane-749d76644c-q8h4t\" (UID: \"2354a4bd-98b1-489f-a4dc-562d4ce123ba\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.769820 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154
edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.782392 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34
a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.797928 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.839666 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.851337 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.851434 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.851455 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.851501 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.851521 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:25Z","lastTransitionTime":"2025-11-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:25 crc kubenswrapper[4743]: W1123 00:07:25.851997 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2354a4bd_98b1_489f_a4dc_562d4ce123ba.slice/crio-018c02ca96ab845735bf667c516602e6b05fe89affe58ffcc47beaf64cb3f1ed WatchSource:0}: Error finding container 018c02ca96ab845735bf667c516602e6b05fe89affe58ffcc47beaf64cb3f1ed: Status 404 returned error can't find the container with id 018c02ca96ab845735bf667c516602e6b05fe89affe58ffcc47beaf64cb3f1ed Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.954331 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.954405 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.954426 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.954460 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:25 crc kubenswrapper[4743]: I1123 00:07:25.954518 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:25Z","lastTransitionTime":"2025-11-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.033935 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" event={"ID":"2354a4bd-98b1-489f-a4dc-562d4ce123ba","Type":"ContainerStarted","Data":"018c02ca96ab845735bf667c516602e6b05fe89affe58ffcc47beaf64cb3f1ed"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.057524 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.057630 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.057660 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.057696 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.057721 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.161410 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.161474 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.161498 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.161515 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.161525 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.265032 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.265089 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.265098 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.265125 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.265259 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.369225 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.369264 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.369275 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.369291 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.369303 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.472420 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.472458 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.472467 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.472497 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.472509 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.547138 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.547202 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.547211 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.547230 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.547242 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.550585 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.550858 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.551011 4743 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.551029 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.551109 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:07:42.551056476 +0000 UTC m=+54.629154653 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.551169 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:42.551153079 +0000 UTC m=+54.629251246 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.551235 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.551301 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.551472 4743 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.551558 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.551600 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.551627 4743 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.551578 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:42.551555049 +0000 UTC m=+54.629653176 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.551734 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:42.551711643 +0000 UTC m=+54.629809810 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.551855 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.551964 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.552042 4743 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.552193 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 00:07:42.552169924 +0000 UTC m=+54.630268061 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.563503 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.569200 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.569250 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.569265 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.569288 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.569301 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.585647 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.592238 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.592311 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.592329 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.592357 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.592376 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.614690 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.626208 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.626281 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.626296 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.626316 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.626358 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.642005 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.642383 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-t8ddf"] Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.643415 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.643551 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.647242 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.647306 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.647318 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.647343 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.647360 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.658858 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.669601 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.669770 4743 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.672439 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.672527 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.672541 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.672564 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.672579 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.675906 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.691849 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.713181 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:23Z\\\",\\\"message\\\":\\\"ector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1123 00:07:23.148308 5995 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 00:07:23.149624 5995 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 00:07:23.149695 5995 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 00:07:23.149705 5995 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 00:07:23.149777 5995 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 00:07:23.149801 5995 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 00:07:23.149818 5995 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 00:07:23.149849 5995 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 00:07:23.149865 5995 factory.go:656] Stopping watch factory\\\\nI1123 00:07:23.149961 5995 ovnkube.go:599] Stopped ovnkube\\\\nI1123 00:07:23.149866 5995 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 00:07:23.149878 5995 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.721864 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.722018 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.722176 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.722505 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.753410 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.753718 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcbqv\" (UniqueName: \"kubernetes.io/projected/24ea31d8-fd1d-4396-9b78-3058666d315a-kube-api-access-bcbqv\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.754000 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.775140 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.775205 4743 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.775218 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.775241 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.775258 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.791506 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.810592 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.830824 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.853405 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.855005 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.855147 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcbqv\" (UniqueName: \"kubernetes.io/projected/24ea31d8-fd1d-4396-9b78-3058666d315a-kube-api-access-bcbqv\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.855216 4743 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:26 crc kubenswrapper[4743]: E1123 00:07:26.855514 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs podName:24ea31d8-fd1d-4396-9b78-3058666d315a nodeName:}" failed. No retries permitted until 2025-11-23 00:07:27.355456061 +0000 UTC m=+39.433554198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs") pod "network-metrics-daemon-t8ddf" (UID: "24ea31d8-fd1d-4396-9b78-3058666d315a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.874599 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.877439 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.877471 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.877498 4743 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.877518 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.877528 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.877897 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcbqv\" (UniqueName: \"kubernetes.io/projected/24ea31d8-fd1d-4396-9b78-3058666d315a-kube-api-access-bcbqv\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.899554 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.915936 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.941646 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.964739 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.980572 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.980630 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.980643 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.980664 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.980677 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:26Z","lastTransitionTime":"2025-11-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.984673 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:26 crc kubenswrapper[4743]: I1123 00:07:26.998433 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.040889 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/0.log" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.045090 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" 
event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986"} Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.045780 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.046830 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" event={"ID":"2354a4bd-98b1-489f-a4dc-562d4ce123ba","Type":"ContainerStarted","Data":"7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164"} Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.065138 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.080739 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.083231 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.083301 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.083319 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.083344 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.083361 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:27Z","lastTransitionTime":"2025-11-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.098241 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.116630 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.137900 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.151337 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.167418 4743 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.186301 4743 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.186802 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.187079 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.187192 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.187350 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.187480 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:27Z","lastTransitionTime":"2025-11-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.202379 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.216929 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.240059 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.258640 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.276679 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.290417 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.290501 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.290517 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.290565 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.290579 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:27Z","lastTransitionTime":"2025-11-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.290773 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.324352 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:23Z\\\",\\\"message\\\":\\\"ector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1123 00:07:23.148308 5995 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 00:07:23.149624 5995 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 00:07:23.149695 5995 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 00:07:23.149705 5995 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 00:07:23.149777 5995 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 00:07:23.149801 5995 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 00:07:23.149818 5995 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 00:07:23.149849 5995 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 00:07:23.149865 5995 factory.go:656] Stopping watch factory\\\\nI1123 00:07:23.149961 5995 ovnkube.go:599] Stopped ovnkube\\\\nI1123 00:07:23.149866 5995 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 00:07:23.149878 5995 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.343509 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.360382 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:27 crc kubenswrapper[4743]: E1123 00:07:27.360727 4743 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:27 crc kubenswrapper[4743]: E1123 00:07:27.360856 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs podName:24ea31d8-fd1d-4396-9b78-3058666d315a nodeName:}" failed. No retries permitted until 2025-11-23 00:07:28.360816677 +0000 UTC m=+40.438914854 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs") pod "network-metrics-daemon-t8ddf" (UID: "24ea31d8-fd1d-4396-9b78-3058666d315a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.394330 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.394410 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.394436 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.394469 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.394551 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:27Z","lastTransitionTime":"2025-11-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.498099 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.498157 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.498175 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.498204 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.498225 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:27Z","lastTransitionTime":"2025-11-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.601630 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.601701 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.601719 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.601745 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.601763 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:27Z","lastTransitionTime":"2025-11-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.705107 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.705152 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.705165 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.705188 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.705203 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:27Z","lastTransitionTime":"2025-11-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.722223 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:27 crc kubenswrapper[4743]: E1123 00:07:27.722524 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.810906 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.812145 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.812161 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.812191 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.812205 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:27Z","lastTransitionTime":"2025-11-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.915804 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.915866 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.915883 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.915907 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:27 crc kubenswrapper[4743]: I1123 00:07:27.915923 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:27Z","lastTransitionTime":"2025-11-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.019777 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.019848 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.019866 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.019890 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.019907 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:28Z","lastTransitionTime":"2025-11-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.055310 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/1.log" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.056149 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/0.log" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.061089 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986" exitCode=1 Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.061190 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.061289 4743 scope.go:117] "RemoveContainer" containerID="56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.062266 4743 scope.go:117] "RemoveContainer" containerID="7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986" Nov 23 00:07:28 crc kubenswrapper[4743]: E1123 00:07:28.062519 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.087060 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.108723 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.123569 4743 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.123607 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.123619 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.123641 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.123655 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:28Z","lastTransitionTime":"2025-11-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.127125 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.152570 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.173378 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.189917 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.205354 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.223327 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.226900 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.226968 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.226988 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.227015 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.227033 4743 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:28Z","lastTransitionTime":"2025-11-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.250666 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.268233 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.303047 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:23Z\\\",\\\"message\\\":\\\"ector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1123 00:07:23.148308 5995 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 00:07:23.149624 5995 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 00:07:23.149695 5995 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 00:07:23.149705 5995 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 00:07:23.149777 5995 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 00:07:23.149801 5995 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 00:07:23.149818 5995 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 00:07:23.149849 5995 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 00:07:23.149865 5995 factory.go:656] Stopping watch factory\\\\nI1123 00:07:23.149961 5995 ovnkube.go:599] Stopped ovnkube\\\\nI1123 00:07:23.149866 5995 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 00:07:23.149878 5995 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:27Z\\\",\\\"message\\\":\\\"1.903348ms, libovsdb time 1.18115ms\\\\nI1123 00:07:27.361216 6184 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI1123 00:07:27.361217 6184 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nF1123 00:07:27.359098 6184 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:27.361230 6184 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in 
cache\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.325842 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.330610 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.330664 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.330679 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.330703 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.330719 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:28Z","lastTransitionTime":"2025-11-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.343004 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.361742 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.375747 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:28 crc kubenswrapper[4743]: E1123 00:07:28.375960 4743 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:28 crc kubenswrapper[4743]: E1123 00:07:28.376057 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs podName:24ea31d8-fd1d-4396-9b78-3058666d315a nodeName:}" failed. No retries permitted until 2025-11-23 00:07:30.37603112 +0000 UTC m=+42.454129257 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs") pod "network-metrics-daemon-t8ddf" (UID: "24ea31d8-fd1d-4396-9b78-3058666d315a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.380606 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.403186 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T0
0:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2
de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.433472 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.433525 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.433536 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.433554 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.433566 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:28Z","lastTransitionTime":"2025-11-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.536598 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.536658 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.536676 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.536697 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.536755 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:28Z","lastTransitionTime":"2025-11-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.640287 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.640339 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.640352 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.640376 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.640390 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:28Z","lastTransitionTime":"2025-11-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.722199 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.722326 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:28 crc kubenswrapper[4743]: E1123 00:07:28.722414 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.722339 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:28 crc kubenswrapper[4743]: E1123 00:07:28.722558 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:28 crc kubenswrapper[4743]: E1123 00:07:28.722666 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.741439 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.743744 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.743807 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.743833 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.743871 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.743903 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:28Z","lastTransitionTime":"2025-11-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.755794 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.768176 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.780856 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.794759 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.809394 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.823867 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.846571 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.846624 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.846644 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.846670 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.846689 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:28Z","lastTransitionTime":"2025-11-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.851107 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c1338
3d67bcef23250cef4cfcd986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:23Z\\\",\\\"message\\\":\\\"ector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1123 00:07:23.148308 5995 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 00:07:23.149624 5995 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 00:07:23.149695 5995 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 00:07:23.149705 5995 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 00:07:23.149777 5995 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 00:07:23.149801 5995 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 00:07:23.149818 5995 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 00:07:23.149849 5995 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 00:07:23.149865 5995 factory.go:656] Stopping watch factory\\\\nI1123 00:07:23.149961 5995 ovnkube.go:599] Stopped ovnkube\\\\nI1123 00:07:23.149866 5995 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 00:07:23.149878 5995 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:27Z\\\",\\\"message\\\":\\\"1.903348ms, libovsdb time 1.18115ms\\\\nI1123 00:07:27.361216 6184 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI1123 00:07:27.361217 6184 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nF1123 00:07:27.359098 6184 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:27.361230 6184 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.866588 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.884682 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.900546 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.916117 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.928832 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.942799 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.949183 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.949227 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.949241 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.949267 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.949281 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:28Z","lastTransitionTime":"2025-11-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.952953 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:28 crc kubenswrapper[4743]: I1123 00:07:28.963228 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:28Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.052237 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.052304 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.052325 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.052351 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.052369 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:29Z","lastTransitionTime":"2025-11-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.066421 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" event={"ID":"2354a4bd-98b1-489f-a4dc-562d4ce123ba","Type":"ContainerStarted","Data":"ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f"} Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.074153 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/1.log" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.084624 4743 scope.go:117] "RemoveContainer" containerID="7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986" Nov 23 00:07:29 crc kubenswrapper[4743]: E1123 00:07:29.085326 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.088066 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.102614 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.118688 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.131102 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.150784 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c
9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.156915 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.156964 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.156977 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.156997 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.157010 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:29Z","lastTransitionTime":"2025-11-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.168049 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.179474 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.197078 4743 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.213569 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.229701 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.243927 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.259023 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.260064 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.260147 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.260173 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.260207 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.260231 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:29Z","lastTransitionTime":"2025-11-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.275057 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.292260 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.305787 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.326993 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56d03e50e554ccd166e4a9596f882a563b18ba56dbfb53543f975305d1fa6e36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:23Z\\\",\\\"message\\\":\\\"ector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1123 00:07:23.148308 5995 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 00:07:23.149624 5995 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 00:07:23.149695 5995 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 00:07:23.149705 5995 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 00:07:23.149777 5995 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 00:07:23.149801 5995 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 00:07:23.149818 5995 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 00:07:23.149829 5995 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 00:07:23.149849 5995 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 00:07:23.149865 5995 factory.go:656] Stopping watch factory\\\\nI1123 00:07:23.149961 5995 ovnkube.go:599] Stopped ovnkube\\\\nI1123 00:07:23.149866 5995 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 00:07:23.149878 5995 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:27Z\\\",\\\"message\\\":\\\"1.903348ms, libovsdb time 1.18115ms\\\\nI1123 00:07:27.361216 6184 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI1123 00:07:27.361217 6184 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nF1123 00:07:27.359098 6184 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:27.361230 6184 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in 
cache\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.342266 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.354331 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.363123 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.363164 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.363173 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.363190 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.363201 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:29Z","lastTransitionTime":"2025-11-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.374763 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.392576 4743 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.407999 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.423938 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 
00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.437031 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.449385 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.460829 4743 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.465822 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.465885 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 
00:07:29.465898 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.465922 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.465936 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:29Z","lastTransitionTime":"2025-11-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.476038 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.489226 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.500670 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.510586 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.539310 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:27Z\\\",\\\"message\\\":\\\"1.903348ms, libovsdb time 1.18115ms\\\\nI1123 00:07:27.361216 6184 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI1123 00:07:27.361217 6184 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nF1123 00:07:27.359098 6184 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:27.361230 6184 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.556055 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.568473 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.568518 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.568529 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.568547 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.568560 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:29Z","lastTransitionTime":"2025-11-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.573935 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:29Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.672274 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.672359 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.672387 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.672423 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.672448 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:29Z","lastTransitionTime":"2025-11-23T00:07:29Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.721577 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:29 crc kubenswrapper[4743]: E1123 00:07:29.721807 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.776599 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.776693 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.776712 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.776741 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.776767 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:29Z","lastTransitionTime":"2025-11-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.880816 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.880888 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.880904 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.880929 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.880944 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:29Z","lastTransitionTime":"2025-11-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.984385 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.984455 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.984504 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.984535 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:29 crc kubenswrapper[4743]: I1123 00:07:29.984556 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:29Z","lastTransitionTime":"2025-11-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.087779 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.087849 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.087866 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.087896 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.087916 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:30Z","lastTransitionTime":"2025-11-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.192059 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.192120 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.192132 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.192155 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.192170 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:30Z","lastTransitionTime":"2025-11-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.295846 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.295917 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.295941 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.295977 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.296008 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:30Z","lastTransitionTime":"2025-11-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.398718 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:30 crc kubenswrapper[4743]: E1123 00:07:30.398913 4743 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:30 crc kubenswrapper[4743]: E1123 00:07:30.399003 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs podName:24ea31d8-fd1d-4396-9b78-3058666d315a nodeName:}" failed. No retries permitted until 2025-11-23 00:07:34.398977309 +0000 UTC m=+46.477075446 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs") pod "network-metrics-daemon-t8ddf" (UID: "24ea31d8-fd1d-4396-9b78-3058666d315a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.400211 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.400251 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.400269 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.400289 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.400302 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:30Z","lastTransitionTime":"2025-11-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.503898 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.503990 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.504018 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.504052 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.504077 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:30Z","lastTransitionTime":"2025-11-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.608133 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.608213 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.608234 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.608262 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.608281 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:30Z","lastTransitionTime":"2025-11-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.711970 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.712030 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.712044 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.712066 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.712080 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:30Z","lastTransitionTime":"2025-11-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.722335 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.722345 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.722418 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:30 crc kubenswrapper[4743]: E1123 00:07:30.722827 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:30 crc kubenswrapper[4743]: E1123 00:07:30.722979 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:30 crc kubenswrapper[4743]: E1123 00:07:30.723189 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.723245 4743 scope.go:117] "RemoveContainer" containerID="cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.815056 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.815130 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.815150 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.815179 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.815199 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:30Z","lastTransitionTime":"2025-11-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.917646 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.917701 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.917713 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.917756 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:30 crc kubenswrapper[4743]: I1123 00:07:30.917774 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:30Z","lastTransitionTime":"2025-11-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.028743 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.028809 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.028818 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.028835 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.028846 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:31Z","lastTransitionTime":"2025-11-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.094926 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.098080 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2"} Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.098660 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.123508 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.131169 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.131196 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.131206 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.131221 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.131232 4743 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:31Z","lastTransitionTime":"2025-11-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.146428 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.167056 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.185879 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.205924 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.229606 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.234317 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.234374 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.234393 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.234425 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.234447 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:31Z","lastTransitionTime":"2025-11-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.246668 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.267619 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:27Z\\\",\\\"message\\\":\\\"1.903348ms, libovsdb time 1.18115ms\\\\nI1123 00:07:27.361216 6184 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI1123 00:07:27.361217 6184 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nF1123 00:07:27.359098 6184 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:27.361230 6184 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.287280 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.305462 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.322131 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.336130 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.337120 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.337186 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.337205 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.337235 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.337255 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:31Z","lastTransitionTime":"2025-11-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.359148 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.377291 4743 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.393787 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.409874 4743 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:31Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.440775 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.440850 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.440874 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.440909 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.440928 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:31Z","lastTransitionTime":"2025-11-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.544672 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.544783 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.544809 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.544931 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.544955 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:31Z","lastTransitionTime":"2025-11-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.648604 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.648684 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.648708 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.648740 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.648760 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:31Z","lastTransitionTime":"2025-11-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.721580 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:31 crc kubenswrapper[4743]: E1123 00:07:31.721827 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.751676 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.751733 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.751746 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.751771 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.751787 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:31Z","lastTransitionTime":"2025-11-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.863287 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.863328 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.863337 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.863354 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.863366 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:31Z","lastTransitionTime":"2025-11-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.966747 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.966802 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.966811 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.966827 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:31 crc kubenswrapper[4743]: I1123 00:07:31.966840 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:31Z","lastTransitionTime":"2025-11-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.070093 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.070145 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.070158 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.070177 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.070192 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:32Z","lastTransitionTime":"2025-11-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.174104 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.174188 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.174205 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.174234 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.174256 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:32Z","lastTransitionTime":"2025-11-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.278413 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.278456 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.278467 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.278509 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.278520 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:32Z","lastTransitionTime":"2025-11-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.380992 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.381039 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.381052 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.381071 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.381084 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:32Z","lastTransitionTime":"2025-11-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.484878 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.484933 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.484953 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.484981 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.485006 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:32Z","lastTransitionTime":"2025-11-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.588583 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.588666 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.588685 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.588716 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.588739 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:32Z","lastTransitionTime":"2025-11-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.692090 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.692164 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.692182 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.692211 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.692230 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:32Z","lastTransitionTime":"2025-11-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.721557 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.721660 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.721748 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:32 crc kubenswrapper[4743]: E1123 00:07:32.721988 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:32 crc kubenswrapper[4743]: E1123 00:07:32.722186 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:32 crc kubenswrapper[4743]: E1123 00:07:32.722388 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.795106 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.795190 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.795210 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.795239 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.795258 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:32Z","lastTransitionTime":"2025-11-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.899339 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.899436 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.899458 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.899538 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:32 crc kubenswrapper[4743]: I1123 00:07:32.899565 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:32Z","lastTransitionTime":"2025-11-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.003463 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.003592 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.003622 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.003661 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.003688 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:33Z","lastTransitionTime":"2025-11-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.106406 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.106475 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.106524 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.106553 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.106574 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:33Z","lastTransitionTime":"2025-11-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.210932 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.211022 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.211044 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.211075 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.211099 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:33Z","lastTransitionTime":"2025-11-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.314800 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.314893 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.314919 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.314953 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.314975 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:33Z","lastTransitionTime":"2025-11-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.418574 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.418655 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.418679 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.418723 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.418749 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:33Z","lastTransitionTime":"2025-11-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.522782 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.522849 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.522865 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.522894 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.522913 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:33Z","lastTransitionTime":"2025-11-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.626552 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.626601 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.626615 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.626635 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.626652 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:33Z","lastTransitionTime":"2025-11-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.721671 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:33 crc kubenswrapper[4743]: E1123 00:07:33.721860 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.729655 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.729743 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.729761 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.729792 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.729815 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:33Z","lastTransitionTime":"2025-11-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.833034 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.833089 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.833109 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.833136 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.833153 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:33Z","lastTransitionTime":"2025-11-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.937039 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.937115 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.937133 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.937161 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:33 crc kubenswrapper[4743]: I1123 00:07:33.937179 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:33Z","lastTransitionTime":"2025-11-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.040761 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.041177 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.041194 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.041221 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.041241 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:34Z","lastTransitionTime":"2025-11-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.145182 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.145269 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.145291 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.145326 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.145351 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:34Z","lastTransitionTime":"2025-11-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.249328 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.249420 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.249440 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.249470 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.249522 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:34Z","lastTransitionTime":"2025-11-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.353057 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.353125 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.353142 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.353167 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.353188 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:34Z","lastTransitionTime":"2025-11-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.447697 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:34 crc kubenswrapper[4743]: E1123 00:07:34.447955 4743 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:34 crc kubenswrapper[4743]: E1123 00:07:34.448115 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs podName:24ea31d8-fd1d-4396-9b78-3058666d315a nodeName:}" failed. No retries permitted until 2025-11-23 00:07:42.448082867 +0000 UTC m=+54.526181074 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs") pod "network-metrics-daemon-t8ddf" (UID: "24ea31d8-fd1d-4396-9b78-3058666d315a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.456792 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.456850 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.456869 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.456898 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.456917 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:34Z","lastTransitionTime":"2025-11-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.560470 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.560587 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.560606 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.560633 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.560654 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:34Z","lastTransitionTime":"2025-11-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.665092 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.665180 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.665206 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.665232 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.665254 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:34Z","lastTransitionTime":"2025-11-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.721428 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.721595 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.721659 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:34 crc kubenswrapper[4743]: E1123 00:07:34.721739 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:34 crc kubenswrapper[4743]: E1123 00:07:34.721939 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:34 crc kubenswrapper[4743]: E1123 00:07:34.722084 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.768234 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.768285 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.768299 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.768326 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.768346 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:34Z","lastTransitionTime":"2025-11-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.872038 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.872085 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.872097 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.872118 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.872134 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:34Z","lastTransitionTime":"2025-11-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.975992 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.976057 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.976069 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.976093 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:34 crc kubenswrapper[4743]: I1123 00:07:34.976108 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:34Z","lastTransitionTime":"2025-11-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.079077 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.079114 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.079123 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.079138 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.079151 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:35Z","lastTransitionTime":"2025-11-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.182542 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.182612 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.182630 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.182659 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.182679 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:35Z","lastTransitionTime":"2025-11-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.289684 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.289801 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.289824 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.289848 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.289868 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:35Z","lastTransitionTime":"2025-11-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.393101 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.393153 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.393161 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.393185 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.393196 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:35Z","lastTransitionTime":"2025-11-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.496900 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.496951 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.496966 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.496989 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.497004 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:35Z","lastTransitionTime":"2025-11-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.599731 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.599798 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.599810 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.599829 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.599841 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:35Z","lastTransitionTime":"2025-11-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.702472 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.702531 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.702568 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.702585 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.702597 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:35Z","lastTransitionTime":"2025-11-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.722137 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:35 crc kubenswrapper[4743]: E1123 00:07:35.722351 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.806025 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.806098 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.806117 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.806146 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.806166 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:35Z","lastTransitionTime":"2025-11-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.909316 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.909364 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.909374 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.909392 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:35 crc kubenswrapper[4743]: I1123 00:07:35.909404 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:35Z","lastTransitionTime":"2025-11-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.012513 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.012564 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.012573 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.012592 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.012605 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.116266 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.116341 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.116360 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.116384 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.116403 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.220032 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.220117 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.220138 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.220171 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.220192 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.323359 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.323405 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.323415 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.323432 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.323443 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.427557 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.427636 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.427655 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.427685 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.427704 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.531807 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.531880 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.531898 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.531924 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.531945 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.635679 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.635736 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.635753 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.635781 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.635799 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.722392 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.722443 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:36 crc kubenswrapper[4743]: E1123 00:07:36.722634 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.722746 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:36 crc kubenswrapper[4743]: E1123 00:07:36.722864 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:36 crc kubenswrapper[4743]: E1123 00:07:36.723079 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.739419 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.739480 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.739524 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.739551 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.739571 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.842265 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.842330 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.842343 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.842360 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.842370 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.904665 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.904709 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.904726 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.904747 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.904760 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: E1123 00:07:36.926250 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:36Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.930683 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.930798 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.930861 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.930937 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.930998 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: E1123 00:07:36.944752 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:36Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.949736 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.949776 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.949786 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.949808 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.949820 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: E1123 00:07:36.965773 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:36Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.971016 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.971189 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.971284 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.971382 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:36 crc kubenswrapper[4743]: I1123 00:07:36.971469 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:36Z","lastTransitionTime":"2025-11-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:36 crc kubenswrapper[4743]: E1123 00:07:36.997720 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:36Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.003983 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.004044 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.004059 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.004109 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.004124 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:37Z","lastTransitionTime":"2025-11-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:37 crc kubenswrapper[4743]: E1123 00:07:37.021691 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:37Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:37 crc kubenswrapper[4743]: E1123 00:07:37.021938 4743 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.024696 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.024753 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.024772 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.024801 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.024830 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:37Z","lastTransitionTime":"2025-11-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.127794 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.127875 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.127898 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.127926 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.127947 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:37Z","lastTransitionTime":"2025-11-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.231215 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.231293 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.231316 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.231344 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.231362 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:37Z","lastTransitionTime":"2025-11-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.334639 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.334732 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.334773 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.334808 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.334834 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:37Z","lastTransitionTime":"2025-11-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.438570 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.438642 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.438665 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.438692 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.438710 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:37Z","lastTransitionTime":"2025-11-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.541289 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.541361 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.541378 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.541412 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.541430 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:37Z","lastTransitionTime":"2025-11-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.643854 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.643920 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.643939 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.643965 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.643983 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:37Z","lastTransitionTime":"2025-11-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.721524 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:37 crc kubenswrapper[4743]: E1123 00:07:37.721753 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.747007 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.747049 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.747060 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.747075 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.747085 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:37Z","lastTransitionTime":"2025-11-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.849689 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.849780 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.849803 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.849829 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.849849 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:37Z","lastTransitionTime":"2025-11-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.952582 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.952652 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.952669 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.952694 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:37 crc kubenswrapper[4743]: I1123 00:07:37.952713 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:37Z","lastTransitionTime":"2025-11-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.056257 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.056722 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.056867 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.057055 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.057279 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:38Z","lastTransitionTime":"2025-11-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.161533 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.161672 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.161692 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.161725 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.161746 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:38Z","lastTransitionTime":"2025-11-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.264975 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.265042 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.265060 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.265087 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.265107 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:38Z","lastTransitionTime":"2025-11-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.368194 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.368259 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.368276 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.368305 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.368329 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:38Z","lastTransitionTime":"2025-11-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.471865 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.471946 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.471964 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.472039 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.472058 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:38Z","lastTransitionTime":"2025-11-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.574855 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.574924 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.574942 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.574967 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.574986 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:38Z","lastTransitionTime":"2025-11-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.678591 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.678643 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.678659 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.678682 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.678699 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:38Z","lastTransitionTime":"2025-11-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.721286 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.721560 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.721539 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:38 crc kubenswrapper[4743]: E1123 00:07:38.721737 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:38 crc kubenswrapper[4743]: E1123 00:07:38.721569 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:38 crc kubenswrapper[4743]: E1123 00:07:38.722014 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.745302 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.765142 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.781461 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.781553 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.781572 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.781597 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.781614 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:38Z","lastTransitionTime":"2025-11-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.792562 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.822011 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.846362 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.868397 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.885644 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.885972 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.886108 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.886247 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.886392 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:38Z","lastTransitionTime":"2025-11-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.891231 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.908608 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.930693 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.945660 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.964622 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.976792 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.989769 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.989838 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.989856 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.989886 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.989903 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:38Z","lastTransitionTime":"2025-11-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:38 crc kubenswrapper[4743]: I1123 00:07:38.990537 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:38Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.008784 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:39Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.022229 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:39Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.049556 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:27Z\\\",\\\"message\\\":\\\"1.903348ms, libovsdb time 1.18115ms\\\\nI1123 00:07:27.361216 6184 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI1123 00:07:27.361217 6184 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nF1123 00:07:27.359098 6184 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:27.361230 6184 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in 
cache\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:39Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.093101 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.093144 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.093157 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.093177 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.093194 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:39Z","lastTransitionTime":"2025-11-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.197023 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.197084 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.197099 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.197124 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.197138 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:39Z","lastTransitionTime":"2025-11-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.300174 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.300235 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.300253 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.300282 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.300301 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:39Z","lastTransitionTime":"2025-11-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.403567 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.403610 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.403627 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.403649 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.403669 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:39Z","lastTransitionTime":"2025-11-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.507563 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.507610 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.507622 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.507641 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.507652 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:39Z","lastTransitionTime":"2025-11-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.610895 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.610957 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.610975 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.610999 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.611017 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:39Z","lastTransitionTime":"2025-11-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.714548 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.714605 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.714618 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.714637 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.714650 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:39Z","lastTransitionTime":"2025-11-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.721832 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:39 crc kubenswrapper[4743]: E1123 00:07:39.722047 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.817703 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.818174 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.818360 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.818530 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.818704 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:39Z","lastTransitionTime":"2025-11-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.922405 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.922755 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.922922 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.923078 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:39 crc kubenswrapper[4743]: I1123 00:07:39.923396 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:39Z","lastTransitionTime":"2025-11-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.027897 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.027970 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.027989 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.028019 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.028040 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:40Z","lastTransitionTime":"2025-11-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.133470 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.133593 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.133613 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.133655 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.133676 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:40Z","lastTransitionTime":"2025-11-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.237410 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.237472 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.237516 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.237541 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.237559 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:40Z","lastTransitionTime":"2025-11-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.341341 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.341405 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.341423 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.341449 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.341471 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:40Z","lastTransitionTime":"2025-11-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.444910 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.444955 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.444967 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.444988 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.445004 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:40Z","lastTransitionTime":"2025-11-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.548042 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.548093 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.548107 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.548128 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.548142 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:40Z","lastTransitionTime":"2025-11-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.651224 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.651283 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.651296 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.651322 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.651336 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:40Z","lastTransitionTime":"2025-11-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.721651 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.721773 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.721680 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:40 crc kubenswrapper[4743]: E1123 00:07:40.721839 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:40 crc kubenswrapper[4743]: E1123 00:07:40.721971 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:40 crc kubenswrapper[4743]: E1123 00:07:40.722096 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.755022 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.755081 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.755094 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.755119 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.755137 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:40Z","lastTransitionTime":"2025-11-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.858743 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.858802 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.858814 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.858834 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.858849 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:40Z","lastTransitionTime":"2025-11-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.961792 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.961950 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.961969 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.961989 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:40 crc kubenswrapper[4743]: I1123 00:07:40.962004 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:40Z","lastTransitionTime":"2025-11-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.065660 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.065734 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.065753 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.065780 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.065801 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:41Z","lastTransitionTime":"2025-11-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.169477 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.169562 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.169572 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.169594 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.169608 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:41Z","lastTransitionTime":"2025-11-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.272372 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.272600 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.272623 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.272654 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.272681 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:41Z","lastTransitionTime":"2025-11-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.376930 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.377388 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.377614 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.377812 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.378008 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:41Z","lastTransitionTime":"2025-11-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.482651 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.482737 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.482755 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.482789 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.482810 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:41Z","lastTransitionTime":"2025-11-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.586887 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.587058 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.587086 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.587166 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.587194 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:41Z","lastTransitionTime":"2025-11-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.691209 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.691283 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.691301 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.691328 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.691353 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:41Z","lastTransitionTime":"2025-11-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.721922 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:41 crc kubenswrapper[4743]: E1123 00:07:41.722358 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.723914 4743 scope.go:117] "RemoveContainer" containerID="7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.796334 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.796437 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.796466 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.796545 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.796574 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:41Z","lastTransitionTime":"2025-11-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.899625 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.899674 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.899691 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.899722 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:41 crc kubenswrapper[4743]: I1123 00:07:41.899742 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:41Z","lastTransitionTime":"2025-11-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.002999 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.003068 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.003085 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.003109 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.003127 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:42Z","lastTransitionTime":"2025-11-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.111686 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.111788 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.111819 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.111951 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.112019 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:42Z","lastTransitionTime":"2025-11-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.149010 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/1.log" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.152831 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95"} Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.154420 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.190853 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.215847 4743 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.215915 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.215927 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.215950 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.215984 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:42Z","lastTransitionTime":"2025-11-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.224030 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e
02797180f12d36641d55cd95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:27Z\\\",\\\"message\\\":\\\"1.903348ms, libovsdb time 1.18115ms\\\\nI1123 00:07:27.361216 6184 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI1123 00:07:27.361217 6184 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nF1123 00:07:27.359098 6184 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:27.361230 6184 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in 
cache\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.244136 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.265142 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.287095 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.311379 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.318190 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.318223 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.318235 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.318255 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.318266 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:42Z","lastTransitionTime":"2025-11-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.333885 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.349719 4743 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.365866 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.379448 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 
00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.396323 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.409666 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.422857 4743 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.422909 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.422923 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.422942 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.422954 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:42Z","lastTransitionTime":"2025-11-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.424248 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.442099 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.449741 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.449938 4743 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.450016 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs podName:24ea31d8-fd1d-4396-9b78-3058666d315a nodeName:}" 
failed. No retries permitted until 2025-11-23 00:07:58.449996418 +0000 UTC m=+70.528094545 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs") pod "network-metrics-daemon-t8ddf" (UID: "24ea31d8-fd1d-4396-9b78-3058666d315a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.456127 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf
92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.470014 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.525665 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.525714 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.525727 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.525746 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.525759 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:42Z","lastTransitionTime":"2025-11-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.628840 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.628903 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.628915 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.628933 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.628945 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:42Z","lastTransitionTime":"2025-11-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.651625 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.651769 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.651816 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.651891 4743 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.651978 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.651997 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.652014 4743 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.651891 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:08:14.651842261 +0000 UTC m=+86.729940408 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.652169 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.652273 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:08:14.652258891 +0000 UTC m=+86.730357128 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.652329 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 00:08:14.652318553 +0000 UTC m=+86.730416800 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.652382 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.652399 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.652450 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.652475 4743 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.652627 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 00:08:14.65259702 +0000 UTC m=+86.730695177 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.652652 4743 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.652752 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:08:14.652725683 +0000 UTC m=+86.730824040 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.721641 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.721731 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.721837 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.721971 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.722318 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:42 crc kubenswrapper[4743]: E1123 00:07:42.722549 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.731715 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.731780 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.731797 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.731825 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.731843 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:42Z","lastTransitionTime":"2025-11-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.835937 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.835990 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.836002 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.836023 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.836039 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:42Z","lastTransitionTime":"2025-11-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.938478 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.938578 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.938601 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.938628 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:42 crc kubenswrapper[4743]: I1123 00:07:42.938648 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:42Z","lastTransitionTime":"2025-11-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.041869 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.041923 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.041937 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.041959 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.041974 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:43Z","lastTransitionTime":"2025-11-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.149197 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.149268 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.149286 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.149315 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.149334 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:43Z","lastTransitionTime":"2025-11-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.159177 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/2.log" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.160259 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/1.log" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.164137 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95" exitCode=1 Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.164204 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95"} Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.164262 4743 scope.go:117] "RemoveContainer" containerID="7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.165481 4743 scope.go:117] "RemoveContainer" containerID="6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95" Nov 23 00:07:43 crc kubenswrapper[4743]: E1123 00:07:43.165898 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.177594 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.188716 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.198455 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.206322 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.228781 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.250767 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.254624 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.254913 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.254937 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.255003 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.255028 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:43Z","lastTransitionTime":"2025-11-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.270417 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.286132 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.318204 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:27Z\\\",\\\"message\\\":\\\"1.903348ms, libovsdb time 1.18115ms\\\\nI1123 00:07:27.361216 6184 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI1123 00:07:27.361217 6184 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nF1123 00:07:27.359098 6184 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:27.361230 6184 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:42Z\\\",\\\"message\\\":\\\"g new object: *v1.Pod openshift-dns/node-resolver-kvwqd\\\\nI1123 00:07:42.706934 6409 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-kvwqd in node crc\\\\nI1123 00:07:42.706907 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 00:07:42.706939 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1123 00:07:42.706949 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: 
failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:42.706953 6409 obj_retry.go:303] Retry object \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.339099 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.358658 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.358689 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.358735 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.358956 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.358999 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.359020 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:43Z","lastTransitionTime":"2025-11-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.373064 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.386599 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.401572 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c
9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.423969 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc 
kubenswrapper[4743]: I1123 00:07:43.437697 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.453644 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.462021 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.462086 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.462106 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.462133 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.462156 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:43Z","lastTransitionTime":"2025-11-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.474685 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.496346 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.515041 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.527697 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.539742 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.566190 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.566244 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.566255 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.566278 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.566290 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:43Z","lastTransitionTime":"2025-11-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.575338 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e
02797180f12d36641d55cd95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c777503b6fb748b698d304db1b29396334c13383d67bcef23250cef4cfcd986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:27Z\\\",\\\"message\\\":\\\"1.903348ms, libovsdb time 1.18115ms\\\\nI1123 00:07:27.361216 6184 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf after 0 failed attempt(s)\\\\nI1123 00:07:27.361217 6184 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nF1123 00:07:27.359098 6184 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:27Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:27.361230 6184 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:42Z\\\",\\\"message\\\":\\\"g new object: *v1.Pod openshift-dns/node-resolver-kvwqd\\\\nI1123 00:07:42.706934 6409 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-kvwqd in node crc\\\\nI1123 00:07:42.706907 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 00:07:42.706939 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1123 00:07:42.706949 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:42.706953 6409 obj_retry.go:303] Retry object \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.596552 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.610079 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.623893 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.668968 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.670307 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.670364 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.670381 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.670401 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.670414 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:43Z","lastTransitionTime":"2025-11-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.685293 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.701987 4743 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.721579 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.721708 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: E1123 00:07:43.721860 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.735595 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.754287 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.770830 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.774287 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.774386 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.774407 4743 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.774435 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.774458 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:43Z","lastTransitionTime":"2025-11-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.784870 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.800153 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23
T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:43Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.878291 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.878370 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.878391 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.878421 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.878442 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:43Z","lastTransitionTime":"2025-11-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.982202 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.982275 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.982294 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.982319 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:43 crc kubenswrapper[4743]: I1123 00:07:43.982343 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:43Z","lastTransitionTime":"2025-11-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.086478 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.086641 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.086661 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.086688 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.086707 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:44Z","lastTransitionTime":"2025-11-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.179849 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/2.log" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.190079 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.190177 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.190198 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.190659 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.190703 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:44Z","lastTransitionTime":"2025-11-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.190778 4743 scope.go:117] "RemoveContainer" containerID="6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95" Nov 23 00:07:44 crc kubenswrapper[4743]: E1123 00:07:44.191063 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.215859 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.233605 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.257752 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.279978 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.298984 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.299068 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.299087 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.299116 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.299150 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:44Z","lastTransitionTime":"2025-11-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.307327 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.324213 4743 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.345162 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.361452 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699
a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.380676 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.398467 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd
90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.402532 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.402587 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.402602 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 
00:07:44.402624 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.402640 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:44Z","lastTransitionTime":"2025-11-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.415450 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.429855 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.451819 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.469855 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.490979 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.506125 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.506201 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.506238 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.506257 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.506268 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:44Z","lastTransitionTime":"2025-11-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.511044 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.540050 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:42Z\\\",\\\"message\\\":\\\"g new object: *v1.Pod openshift-dns/node-resolver-kvwqd\\\\nI1123 00:07:42.706934 6409 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-kvwqd in node crc\\\\nI1123 00:07:42.706907 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 00:07:42.706939 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1123 00:07:42.706949 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:42.706953 6409 obj_retry.go:303] Retry object \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:44Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.610049 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.610116 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.610135 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.610162 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.610182 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:44Z","lastTransitionTime":"2025-11-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.713131 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.713207 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.713226 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.713254 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.713271 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:44Z","lastTransitionTime":"2025-11-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.721525 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.721587 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:44 crc kubenswrapper[4743]: E1123 00:07:44.721684 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.721605 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:44 crc kubenswrapper[4743]: E1123 00:07:44.721778 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:44 crc kubenswrapper[4743]: E1123 00:07:44.721894 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.817071 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.817140 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.817165 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.817196 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.817218 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:44Z","lastTransitionTime":"2025-11-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.920917 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.921000 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.921019 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.921044 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:44 crc kubenswrapper[4743]: I1123 00:07:44.921062 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:44Z","lastTransitionTime":"2025-11-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.024421 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.024471 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.024507 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.024523 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.024532 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:45Z","lastTransitionTime":"2025-11-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.127059 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.127332 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.127356 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.127390 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.127421 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:45Z","lastTransitionTime":"2025-11-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.230435 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.230552 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.230574 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.230602 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.230625 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:45Z","lastTransitionTime":"2025-11-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.334040 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.334112 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.334132 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.334161 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.334180 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:45Z","lastTransitionTime":"2025-11-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.437327 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.437390 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.437405 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.437431 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.437452 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:45Z","lastTransitionTime":"2025-11-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.540531 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.540571 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.540580 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.540596 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.540607 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:45Z","lastTransitionTime":"2025-11-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.644589 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.644666 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.644689 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.644719 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.644741 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:45Z","lastTransitionTime":"2025-11-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.721220 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:45 crc kubenswrapper[4743]: E1123 00:07:45.721442 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.747794 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.747846 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.747856 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.747875 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.747888 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:45Z","lastTransitionTime":"2025-11-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.850578 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.850657 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.850679 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.850709 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.850730 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:45Z","lastTransitionTime":"2025-11-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.954039 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.954128 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.954144 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.954168 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:45 crc kubenswrapper[4743]: I1123 00:07:45.954182 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:45Z","lastTransitionTime":"2025-11-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.104693 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.104735 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.104749 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.104767 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.104778 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:46Z","lastTransitionTime":"2025-11-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.207900 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.207967 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.207990 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.208029 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.208054 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:46Z","lastTransitionTime":"2025-11-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.311275 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.311349 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.311368 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.311401 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.311421 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:46Z","lastTransitionTime":"2025-11-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.415177 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.415219 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.415228 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.415248 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.415263 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:46Z","lastTransitionTime":"2025-11-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.518759 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.518820 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.518836 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.518866 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.518885 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:46Z","lastTransitionTime":"2025-11-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.622124 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.622164 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.622174 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.622190 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.622200 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:46Z","lastTransitionTime":"2025-11-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.721666 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.721743 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:46 crc kubenswrapper[4743]: E1123 00:07:46.721835 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.721916 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:46 crc kubenswrapper[4743]: E1123 00:07:46.722044 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:46 crc kubenswrapper[4743]: E1123 00:07:46.722136 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.724390 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.724426 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.724435 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.724451 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.724463 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:46Z","lastTransitionTime":"2025-11-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.827382 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.827441 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.827451 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.827472 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.827500 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:46Z","lastTransitionTime":"2025-11-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.930792 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.930857 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.930874 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.930899 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:46 crc kubenswrapper[4743]: I1123 00:07:46.930917 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:46Z","lastTransitionTime":"2025-11-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.034778 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.034825 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.034839 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.034863 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.034878 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.138367 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.138440 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.138457 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.138503 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.138523 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.179439 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.179546 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.179566 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.179588 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.179603 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: E1123 00:07:47.196631 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.203521 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.203569 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.203585 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.203610 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.203626 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: E1123 00:07:47.229752 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.236694 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.236765 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.236782 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.236808 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.236832 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: E1123 00:07:47.256049 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.260389 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.260439 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.260450 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.260467 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.260478 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: E1123 00:07:47.274826 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.288389 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.288457 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.288468 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.288509 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.288524 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: E1123 00:07:47.302364 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: E1123 00:07:47.302478 4743 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.304453 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.304519 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.304532 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.304551 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.304565 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.407291 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.407343 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.407352 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.407373 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.407384 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.456727 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.471390 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.492647 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.505587 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.509722 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.509791 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.509807 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.509832 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.509846 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.518198 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.532743 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.547532 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.566520 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.598331 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:42Z\\\",\\\"message\\\":\\\"g new object: *v1.Pod openshift-dns/node-resolver-kvwqd\\\\nI1123 00:07:42.706934 6409 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-kvwqd in node crc\\\\nI1123 00:07:42.706907 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 00:07:42.706939 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1123 00:07:42.706949 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:42.706953 6409 obj_retry.go:303] Retry object 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.612590 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.612660 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.612674 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.612697 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.612712 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.618518 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.638987 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.655174 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.671416 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.692220 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.710081 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.715997 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.716032 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc 
kubenswrapper[4743]: I1123 00:07:47.716047 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.716069 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.716086 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.721646 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:47 crc kubenswrapper[4743]: E1123 00:07:47.721765 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.727448 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.745131 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.765663 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.818964 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.819014 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.819027 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.819044 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.819057 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.921998 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.922084 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.922104 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.922132 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:47 crc kubenswrapper[4743]: I1123 00:07:47.922151 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:47Z","lastTransitionTime":"2025-11-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.025241 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.025315 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.025340 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.025376 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.025404 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:48Z","lastTransitionTime":"2025-11-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.129747 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.129854 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.129873 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.129903 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.129925 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:48Z","lastTransitionTime":"2025-11-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.233121 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.233199 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.233219 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.233250 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.233272 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:48Z","lastTransitionTime":"2025-11-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.338344 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.338431 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.338453 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.338533 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.338559 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:48Z","lastTransitionTime":"2025-11-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.442982 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.443327 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.443454 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.443701 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.443852 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:48Z","lastTransitionTime":"2025-11-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.548725 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.549244 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.549379 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.549556 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.549698 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:48Z","lastTransitionTime":"2025-11-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.652453 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.653207 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.653355 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.653468 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.653820 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:48Z","lastTransitionTime":"2025-11-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.721540 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.721731 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:48 crc kubenswrapper[4743]: E1123 00:07:48.721892 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.721936 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:48 crc kubenswrapper[4743]: E1123 00:07:48.722332 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:48 crc kubenswrapper[4743]: E1123 00:07:48.722448 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.743686 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.757367 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.757444 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.757464 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.757511 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.757533 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:48Z","lastTransitionTime":"2025-11-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.766705 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.785376 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.814847 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:42Z\\\",\\\"message\\\":\\\"g new object: *v1.Pod openshift-dns/node-resolver-kvwqd\\\\nI1123 00:07:42.706934 6409 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-kvwqd in node crc\\\\nI1123 00:07:42.706907 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 00:07:42.706939 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1123 00:07:42.706949 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:42.706953 6409 obj_retry.go:303] Retry object \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.833738 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.859241 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.865583 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.865649 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.865685 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.865716 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.865736 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:48Z","lastTransitionTime":"2025-11-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.880115 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.901809 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.919548 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.942426 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.963212 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.971146 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.971265 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.971285 4743 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.971350 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.971378 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:48Z","lastTransitionTime":"2025-11-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:48 crc kubenswrapper[4743]: I1123 00:07:48.981889 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:48.999894 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23
T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:48Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.018085 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:49Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.035872 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:49Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.078394 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:49Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.081248 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.081303 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.081316 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.081341 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.081358 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:49Z","lastTransitionTime":"2025-11-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.094860 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:49Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.185049 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.185106 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.185120 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.185139 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.185154 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:49Z","lastTransitionTime":"2025-11-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.289354 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.289434 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.289454 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.289521 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.289553 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:49Z","lastTransitionTime":"2025-11-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.393701 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.393753 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.393773 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.393798 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.393819 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:49Z","lastTransitionTime":"2025-11-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.497148 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.497198 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.497210 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.497230 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.497246 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:49Z","lastTransitionTime":"2025-11-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.600181 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.600284 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.600312 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.600349 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.600372 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:49Z","lastTransitionTime":"2025-11-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.703699 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.704683 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.704724 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.704751 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.704767 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:49Z","lastTransitionTime":"2025-11-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.722229 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:49 crc kubenswrapper[4743]: E1123 00:07:49.722460 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.808735 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.808817 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.808838 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.808867 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.808887 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:49Z","lastTransitionTime":"2025-11-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.911629 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.911689 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.911701 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.911719 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:49 crc kubenswrapper[4743]: I1123 00:07:49.911731 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:49Z","lastTransitionTime":"2025-11-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.016198 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.016460 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.016514 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.016544 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.016567 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:50Z","lastTransitionTime":"2025-11-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.120685 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.120744 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.120755 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.120780 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.120803 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:50Z","lastTransitionTime":"2025-11-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.223702 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.223756 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.223767 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.223787 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.223799 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:50Z","lastTransitionTime":"2025-11-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.326253 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.326323 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.326338 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.326360 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.326373 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:50Z","lastTransitionTime":"2025-11-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.429625 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.429695 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.429715 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.429744 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.429765 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:50Z","lastTransitionTime":"2025-11-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.533622 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.533697 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.533709 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.533728 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.533758 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:50Z","lastTransitionTime":"2025-11-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.637071 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.637120 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.637133 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.637155 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.637171 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:50Z","lastTransitionTime":"2025-11-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.721721 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.721858 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:50 crc kubenswrapper[4743]: E1123 00:07:50.721922 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.721872 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:50 crc kubenswrapper[4743]: E1123 00:07:50.722109 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:50 crc kubenswrapper[4743]: E1123 00:07:50.722213 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.740191 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.740248 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.740266 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.740294 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.740311 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:50Z","lastTransitionTime":"2025-11-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.843060 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.843122 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.843142 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.843167 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.843186 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:50Z","lastTransitionTime":"2025-11-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.946391 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.946679 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.946699 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.946730 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:50 crc kubenswrapper[4743]: I1123 00:07:50.946749 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:50Z","lastTransitionTime":"2025-11-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.049450 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.049525 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.049537 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.049558 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.049572 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:51Z","lastTransitionTime":"2025-11-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.152123 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.152166 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.152176 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.152192 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.152203 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:51Z","lastTransitionTime":"2025-11-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.254393 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.254447 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.254460 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.254478 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.254533 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:51Z","lastTransitionTime":"2025-11-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.356737 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.356775 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.356790 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.356812 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.356826 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:51Z","lastTransitionTime":"2025-11-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.459721 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.459769 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.459780 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.459798 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.459812 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:51Z","lastTransitionTime":"2025-11-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.562659 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.563224 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.563242 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.563269 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.563287 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:51Z","lastTransitionTime":"2025-11-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.667476 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.667610 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.667619 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.667637 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.667649 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:51Z","lastTransitionTime":"2025-11-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.721897 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:51 crc kubenswrapper[4743]: E1123 00:07:51.722144 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.770465 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.770533 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.770542 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.770560 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.770573 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:51Z","lastTransitionTime":"2025-11-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.872969 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.873020 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.873031 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.873054 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.873069 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:51Z","lastTransitionTime":"2025-11-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.977651 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.977712 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.977735 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.977770 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:51 crc kubenswrapper[4743]: I1123 00:07:51.977806 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:51Z","lastTransitionTime":"2025-11-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.082044 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.082116 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.082134 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.082160 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.082179 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:52Z","lastTransitionTime":"2025-11-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.186830 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.186895 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.186913 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.186941 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.186960 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:52Z","lastTransitionTime":"2025-11-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.290684 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.290757 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.290769 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.290794 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.290809 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:52Z","lastTransitionTime":"2025-11-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.393941 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.394047 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.394074 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.394113 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.394146 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:52Z","lastTransitionTime":"2025-11-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.497348 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.497395 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.497405 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.497421 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.497431 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:52Z","lastTransitionTime":"2025-11-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.600392 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.600521 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.600555 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.600589 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.600612 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:52Z","lastTransitionTime":"2025-11-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.703729 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.703867 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.703892 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.703917 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.703938 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:52Z","lastTransitionTime":"2025-11-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.722232 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.722255 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:52 crc kubenswrapper[4743]: E1123 00:07:52.722398 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:52 crc kubenswrapper[4743]: E1123 00:07:52.722582 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.727416 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:52 crc kubenswrapper[4743]: E1123 00:07:52.727625 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.806962 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.807015 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.807028 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.807050 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.807063 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:52Z","lastTransitionTime":"2025-11-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.909591 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.909652 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.909667 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.909693 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:52 crc kubenswrapper[4743]: I1123 00:07:52.909710 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:52Z","lastTransitionTime":"2025-11-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.012940 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.012998 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.013012 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.013042 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.013056 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:53Z","lastTransitionTime":"2025-11-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.115993 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.116045 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.116057 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.116076 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.116093 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:53Z","lastTransitionTime":"2025-11-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.218186 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.218284 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.218305 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.218337 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.218354 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:53Z","lastTransitionTime":"2025-11-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.320579 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.320625 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.320635 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.320652 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.320666 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:53Z","lastTransitionTime":"2025-11-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.424137 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.424185 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.424202 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.424222 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.424236 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:53Z","lastTransitionTime":"2025-11-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.528318 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.528373 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.528388 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.528408 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.528421 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:53Z","lastTransitionTime":"2025-11-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.632164 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.632241 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.632267 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.632298 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.632320 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:53Z","lastTransitionTime":"2025-11-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.721843 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:53 crc kubenswrapper[4743]: E1123 00:07:53.722053 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.735112 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.735173 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.735187 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.735212 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.735230 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:53Z","lastTransitionTime":"2025-11-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.837732 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.837820 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.837850 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.837884 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.837908 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:53Z","lastTransitionTime":"2025-11-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.940436 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.940521 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.940540 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.940564 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:53 crc kubenswrapper[4743]: I1123 00:07:53.940587 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:53Z","lastTransitionTime":"2025-11-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.044117 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.044188 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.044207 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.044233 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.044253 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:54Z","lastTransitionTime":"2025-11-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.147945 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.148001 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.148021 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.148046 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.148064 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:54Z","lastTransitionTime":"2025-11-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.251885 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.251954 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.251972 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.251999 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.252017 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:54Z","lastTransitionTime":"2025-11-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.354768 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.354837 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.354861 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.354896 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.354923 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:54Z","lastTransitionTime":"2025-11-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.462270 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.462343 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.462358 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.462381 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.462396 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:54Z","lastTransitionTime":"2025-11-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.564955 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.565015 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.565025 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.565043 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.565055 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:54Z","lastTransitionTime":"2025-11-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.668064 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.668146 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.668164 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.668192 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.668211 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:54Z","lastTransitionTime":"2025-11-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.724743 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.724810 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.724836 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:54 crc kubenswrapper[4743]: E1123 00:07:54.724950 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:54 crc kubenswrapper[4743]: E1123 00:07:54.725087 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:54 crc kubenswrapper[4743]: E1123 00:07:54.725193 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.771018 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.771053 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.771064 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.771078 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.771088 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:54Z","lastTransitionTime":"2025-11-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.873906 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.873962 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.873973 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.873992 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.874007 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:54Z","lastTransitionTime":"2025-11-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.977644 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.977701 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.977714 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.977735 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:54 crc kubenswrapper[4743]: I1123 00:07:54.977747 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:54Z","lastTransitionTime":"2025-11-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.080617 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.080678 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.080688 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.080710 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.080753 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:55Z","lastTransitionTime":"2025-11-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.182976 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.183039 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.183050 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.183068 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.183083 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:55Z","lastTransitionTime":"2025-11-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.286133 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.286183 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.286192 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.286211 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.286228 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:55Z","lastTransitionTime":"2025-11-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.389094 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.389137 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.389146 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.389165 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.389177 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:55Z","lastTransitionTime":"2025-11-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.493195 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.493252 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.493267 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.493290 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.493304 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:55Z","lastTransitionTime":"2025-11-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.598328 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.598386 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.598397 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.598416 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.598428 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:55Z","lastTransitionTime":"2025-11-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.701636 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.701750 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.701771 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.701794 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.701837 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:55Z","lastTransitionTime":"2025-11-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.721698 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:55 crc kubenswrapper[4743]: E1123 00:07:55.721872 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.814184 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.814239 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.814254 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.814272 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.814284 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:55Z","lastTransitionTime":"2025-11-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.917499 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.917564 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.917576 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.917600 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:55 crc kubenswrapper[4743]: I1123 00:07:55.917614 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:55Z","lastTransitionTime":"2025-11-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.021000 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.021062 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.021076 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.021101 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.021113 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:56Z","lastTransitionTime":"2025-11-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.124088 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.124129 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.124138 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.124153 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.124163 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:56Z","lastTransitionTime":"2025-11-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.226677 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.226747 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.226756 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.226775 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.226790 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:56Z","lastTransitionTime":"2025-11-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.329638 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.329699 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.329709 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.329727 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.329741 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:56Z","lastTransitionTime":"2025-11-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.432777 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.432846 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.432867 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.432896 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.432918 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:56Z","lastTransitionTime":"2025-11-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.535622 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.535685 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.535696 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.535712 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.535726 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:56Z","lastTransitionTime":"2025-11-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.638830 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.638881 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.638892 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.638912 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.638924 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:56Z","lastTransitionTime":"2025-11-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.721874 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.721930 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.722132 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:56 crc kubenswrapper[4743]: E1123 00:07:56.722242 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:56 crc kubenswrapper[4743]: E1123 00:07:56.722087 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:56 crc kubenswrapper[4743]: E1123 00:07:56.722415 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.723238 4743 scope.go:117] "RemoveContainer" containerID="6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95" Nov 23 00:07:56 crc kubenswrapper[4743]: E1123 00:07:56.723419 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.741201 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.741255 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.741266 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.741283 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.741298 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:56Z","lastTransitionTime":"2025-11-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.843869 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.843919 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.843932 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.843953 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.843967 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:56Z","lastTransitionTime":"2025-11-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.947022 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.947099 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.947115 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.947142 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:56 crc kubenswrapper[4743]: I1123 00:07:56.947162 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:56Z","lastTransitionTime":"2025-11-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.049698 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.049762 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.049780 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.049805 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.049825 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.152829 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.152893 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.152909 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.152933 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.152950 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.255857 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.255910 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.255924 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.255943 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.255958 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.359012 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.359091 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.359114 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.359138 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.359156 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.461834 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.461876 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.461886 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.461903 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.461917 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.564946 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.565000 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.565011 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.565035 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.565048 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.582599 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.582790 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.582847 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.582880 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.582899 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: E1123 00:07:57.599773 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:57Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.604540 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.604605 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.604620 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.604640 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.604667 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: E1123 00:07:57.617648 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:57Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.621793 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.621838 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.621848 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.621867 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.621880 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: E1123 00:07:57.637360 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:57Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.641457 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.641520 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.641532 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.641560 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.641575 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: E1123 00:07:57.654720 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:57Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.659833 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.659874 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.659884 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.659923 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.659936 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: E1123 00:07:57.671767 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:57Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:57 crc kubenswrapper[4743]: E1123 00:07:57.671895 4743 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.674084 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.674149 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.674164 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.674184 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.674219 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.721960 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:57 crc kubenswrapper[4743]: E1123 00:07:57.722134 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.778135 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.778184 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.778197 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.778217 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.778231 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.880989 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.881090 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.881122 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.881161 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.881187 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.984937 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.985007 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.985026 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.985052 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:57 crc kubenswrapper[4743]: I1123 00:07:57.985069 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:57Z","lastTransitionTime":"2025-11-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.088979 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.089047 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.089062 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.089083 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.089118 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:58Z","lastTransitionTime":"2025-11-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.192995 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.193057 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.193089 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.193113 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.193127 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:58Z","lastTransitionTime":"2025-11-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.297359 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.297410 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.297435 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.297454 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.297465 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:58Z","lastTransitionTime":"2025-11-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.400940 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.401016 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.401029 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.401049 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.401062 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:58Z","lastTransitionTime":"2025-11-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.451997 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:58 crc kubenswrapper[4743]: E1123 00:07:58.452226 4743 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:58 crc kubenswrapper[4743]: E1123 00:07:58.452339 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs podName:24ea31d8-fd1d-4396-9b78-3058666d315a nodeName:}" failed. No retries permitted until 2025-11-23 00:08:30.452317191 +0000 UTC m=+102.530415318 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs") pod "network-metrics-daemon-t8ddf" (UID: "24ea31d8-fd1d-4396-9b78-3058666d315a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.503849 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.503937 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.503963 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.503994 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.504019 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:58Z","lastTransitionTime":"2025-11-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.607352 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.607404 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.607417 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.607434 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.607445 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:58Z","lastTransitionTime":"2025-11-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.710437 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.710526 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.710540 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.710559 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.710571 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:58Z","lastTransitionTime":"2025-11-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.721712 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.721803 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:07:58 crc kubenswrapper[4743]: E1123 00:07:58.721900 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.721810 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:07:58 crc kubenswrapper[4743]: E1123 00:07:58.721986 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:07:58 crc kubenswrapper[4743]: E1123 00:07:58.722082 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.735655 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.757697 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.774165 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.786086 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.798079 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.813842 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.813878 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:58 
crc kubenswrapper[4743]: I1123 00:07:58.813892 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.813911 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.813925 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:58Z","lastTransitionTime":"2025-11-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.828744 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e
02797180f12d36641d55cd95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:42Z\\\",\\\"message\\\":\\\"g new object: *v1.Pod openshift-dns/node-resolver-kvwqd\\\\nI1123 00:07:42.706934 6409 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-kvwqd in node crc\\\\nI1123 00:07:42.706907 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 00:07:42.706939 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1123 00:07:42.706949 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:42.706953 6409 obj_retry.go:303] Retry object \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.844904 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.859288 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.873534 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.887656 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.904475 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c
9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.917059 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.917102 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.917115 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.917134 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.917150 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:58Z","lastTransitionTime":"2025-11-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.918310 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.935456 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.953360 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.970058 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 
00:07:58 crc kubenswrapper[4743]: I1123 00:07:58.987400 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.002264 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:58Z is after 2025-08-24T17:21:41Z" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.020774 4743 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.020824 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.020840 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.020862 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.020881 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:59Z","lastTransitionTime":"2025-11-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.124889 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.125013 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.125038 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.125068 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.125089 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:59Z","lastTransitionTime":"2025-11-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.233688 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.233927 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.233948 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.233976 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.233998 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:59Z","lastTransitionTime":"2025-11-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.337251 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.337309 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.337326 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.337353 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.337371 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:59Z","lastTransitionTime":"2025-11-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.440867 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.440940 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.440959 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.440986 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.441003 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:59Z","lastTransitionTime":"2025-11-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.544333 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.544386 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.544396 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.544415 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.544428 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:59Z","lastTransitionTime":"2025-11-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.648105 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.648179 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.648373 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.648408 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.648425 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:59Z","lastTransitionTime":"2025-11-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.722164 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:07:59 crc kubenswrapper[4743]: E1123 00:07:59.722480 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.751906 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.751959 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.751977 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.751999 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.752017 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:59Z","lastTransitionTime":"2025-11-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.855346 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.855410 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.855428 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.855455 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.855475 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:59Z","lastTransitionTime":"2025-11-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.958815 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.958870 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.958887 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.958909 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:07:59 crc kubenswrapper[4743]: I1123 00:07:59.958925 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:07:59Z","lastTransitionTime":"2025-11-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.062364 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.062418 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.062432 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.062455 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.062468 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:00Z","lastTransitionTime":"2025-11-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.166131 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.166192 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.166204 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.166225 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.166239 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:00Z","lastTransitionTime":"2025-11-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.252130 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zvknx_b0418df6-be6b-459c-8685-770bc9c99a0e/kube-multus/0.log" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.252185 4743 generic.go:334] "Generic (PLEG): container finished" podID="b0418df6-be6b-459c-8685-770bc9c99a0e" containerID="c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba" exitCode=1 Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.252220 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zvknx" event={"ID":"b0418df6-be6b-459c-8685-770bc9c99a0e","Type":"ContainerDied","Data":"c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba"} Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.252711 4743 scope.go:117] "RemoveContainer" containerID="c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.269686 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.269726 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.269737 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.269755 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.269768 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:00Z","lastTransitionTime":"2025-11-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.280047 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.296275 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 
2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.308785 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.321864 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.338674 4743 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.356601 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.373118 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.373174 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.373194 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.373232 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.373252 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:00Z","lastTransitionTime":"2025-11-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.376774 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.394123 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.418346 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:42Z\\\",\\\"message\\\":\\\"g new object: *v1.Pod openshift-dns/node-resolver-kvwqd\\\\nI1123 00:07:42.706934 6409 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-kvwqd in node crc\\\\nI1123 00:07:42.706907 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 00:07:42.706939 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1123 00:07:42.706949 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:42.706953 6409 obj_retry.go:303] Retry object \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.432668 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.445314 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.457855 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.475891 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.475946 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.475957 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.475978 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.475994 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:00Z","lastTransitionTime":"2025-11-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.476458 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:59Z\\\",\\\"message\\\":\\\"2025-11-23T00:07:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7\\\\n2025-11-23T00:07:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7 to /host/opt/cni/bin/\\\\n2025-11-23T00:07:14Z [verbose] multus-daemon started\\\\n2025-11-23T00:07:14Z [verbose] Readiness Indicator file check\\\\n2025-11-23T00:07:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.491849 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.506204 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.527027 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 
2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.545076 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:00Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.579910 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.579986 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.580006 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.580033 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.580055 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:00Z","lastTransitionTime":"2025-11-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.683456 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.683524 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.683538 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.683558 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.683573 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:00Z","lastTransitionTime":"2025-11-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.721955 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.722013 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.722195 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:00 crc kubenswrapper[4743]: E1123 00:08:00.722212 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:00 crc kubenswrapper[4743]: E1123 00:08:00.722331 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:00 crc kubenswrapper[4743]: E1123 00:08:00.722433 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.786458 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.786559 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.786578 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.786606 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.786655 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:00Z","lastTransitionTime":"2025-11-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.889032 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.889080 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.889088 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.889105 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.889116 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:00Z","lastTransitionTime":"2025-11-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.992737 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.992787 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.992801 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.992821 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:00 crc kubenswrapper[4743]: I1123 00:08:00.992831 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:00Z","lastTransitionTime":"2025-11-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.095707 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.095753 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.095764 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.095782 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.095796 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:01Z","lastTransitionTime":"2025-11-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.198926 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.198984 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.198998 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.199019 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.199336 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:01Z","lastTransitionTime":"2025-11-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.257828 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zvknx_b0418df6-be6b-459c-8685-770bc9c99a0e/kube-multus/0.log" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.257890 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zvknx" event={"ID":"b0418df6-be6b-459c-8685-770bc9c99a0e","Type":"ContainerStarted","Data":"a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b"} Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.274699 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.287818 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11
-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.302323 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.302294 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.302414 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.302853 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.302894 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.302915 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:01Z","lastTransitionTime":"2025-11-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.319088 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.335782 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.348626 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.360926 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.377649 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.397335 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.405719 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.405781 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.405791 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.405836 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.405851 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:01Z","lastTransitionTime":"2025-11-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.409974 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.440766 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:42Z\\\",\\\"message\\\":\\\"g new object: *v1.Pod openshift-dns/node-resolver-kvwqd\\\\nI1123 00:07:42.706934 6409 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-kvwqd in node crc\\\\nI1123 00:07:42.706907 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 00:07:42.706939 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1123 00:07:42.706949 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:42.706953 6409 obj_retry.go:303] Retry object \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.458278 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.476116 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.493047 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.507864 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.507923 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.507947 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.507980 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.508004 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:01Z","lastTransitionTime":"2025-11-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.511637 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.533919 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:59Z\\\",\\\"message\\\":\\\"2025-11-23T00:07:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7\\\\n2025-11-23T00:07:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7 to /host/opt/cni/bin/\\\\n2025-11-23T00:07:14Z [verbose] multus-daemon started\\\\n2025-11-23T00:07:14Z [verbose] Readiness Indicator file check\\\\n2025-11-23T00:07:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.558766 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:01Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.632764 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.632815 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:01 crc 
kubenswrapper[4743]: I1123 00:08:01.632826 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.632843 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.632855 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:01Z","lastTransitionTime":"2025-11-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.721569 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:01 crc kubenswrapper[4743]: E1123 00:08:01.721756 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.735930 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.735965 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.735975 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.735990 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.736001 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:01Z","lastTransitionTime":"2025-11-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.738143 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.838542 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.838593 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.838604 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.838626 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.838643 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:01Z","lastTransitionTime":"2025-11-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.941657 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.941902 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.941914 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.941937 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:01 crc kubenswrapper[4743]: I1123 00:08:01.941951 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:01Z","lastTransitionTime":"2025-11-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.044478 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.044547 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.044558 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.044578 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.044591 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:02Z","lastTransitionTime":"2025-11-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.146934 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.146977 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.146988 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.147006 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.147017 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:02Z","lastTransitionTime":"2025-11-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.250601 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.250670 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.250682 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.250706 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.250720 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:02Z","lastTransitionTime":"2025-11-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.352742 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.352817 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.352835 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.352861 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.352885 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:02Z","lastTransitionTime":"2025-11-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.460544 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.460647 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.460670 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.460965 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.460989 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:02Z","lastTransitionTime":"2025-11-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.564827 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.564899 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.564921 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.564950 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.564969 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:02Z","lastTransitionTime":"2025-11-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.667742 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.667800 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.667812 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.667831 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.667843 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:02Z","lastTransitionTime":"2025-11-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.721345 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.721372 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.721526 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:02 crc kubenswrapper[4743]: E1123 00:08:02.721680 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:02 crc kubenswrapper[4743]: E1123 00:08:02.721887 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:02 crc kubenswrapper[4743]: E1123 00:08:02.721971 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.770817 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.770867 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.770878 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.770893 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.770904 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:02Z","lastTransitionTime":"2025-11-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.873778 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.873823 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.873836 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.873854 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.873866 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:02Z","lastTransitionTime":"2025-11-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.977397 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.977463 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.977506 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.977535 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:02 crc kubenswrapper[4743]: I1123 00:08:02.977554 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:02Z","lastTransitionTime":"2025-11-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.081201 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.081259 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.081276 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.081302 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.081323 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:03Z","lastTransitionTime":"2025-11-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.185434 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.185599 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.185634 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.185669 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.185695 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:03Z","lastTransitionTime":"2025-11-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.289649 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.289712 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.289730 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.289758 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.289776 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:03Z","lastTransitionTime":"2025-11-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.394213 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.394281 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.394299 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.394326 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.394343 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:03Z","lastTransitionTime":"2025-11-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.497740 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.497800 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.497821 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.497847 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.497868 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:03Z","lastTransitionTime":"2025-11-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.599952 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.599997 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.600008 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.600025 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.600038 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:03Z","lastTransitionTime":"2025-11-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.702546 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.702581 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.702593 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.702611 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.702626 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:03Z","lastTransitionTime":"2025-11-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.721536 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:03 crc kubenswrapper[4743]: E1123 00:08:03.721647 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.805361 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.805409 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.805419 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.805439 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.805452 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:03Z","lastTransitionTime":"2025-11-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.907868 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.907908 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.907919 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.907937 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:03 crc kubenswrapper[4743]: I1123 00:08:03.907952 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:03Z","lastTransitionTime":"2025-11-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.010655 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.010728 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.010746 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.010770 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.010791 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:04Z","lastTransitionTime":"2025-11-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.113909 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.113960 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.113972 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.113989 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.114001 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:04Z","lastTransitionTime":"2025-11-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.216699 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.216743 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.216755 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.216774 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.216787 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:04Z","lastTransitionTime":"2025-11-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.319413 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.319500 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.319515 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.319539 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.319554 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:04Z","lastTransitionTime":"2025-11-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.423211 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.423286 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.423303 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.423341 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.423361 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:04Z","lastTransitionTime":"2025-11-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.526704 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.526768 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.526783 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.526807 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.526826 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:04Z","lastTransitionTime":"2025-11-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.630101 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.630178 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.630196 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.630232 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.630252 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:04Z","lastTransitionTime":"2025-11-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.721787 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:04 crc kubenswrapper[4743]: E1123 00:08:04.722005 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.722302 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:04 crc kubenswrapper[4743]: E1123 00:08:04.722422 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.722677 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:04 crc kubenswrapper[4743]: E1123 00:08:04.722750 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.732517 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.732548 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.732558 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.732573 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.732587 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:04Z","lastTransitionTime":"2025-11-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.835435 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.835561 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.835594 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.835627 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.835659 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:04Z","lastTransitionTime":"2025-11-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.938824 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.938862 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.938870 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.938885 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:04 crc kubenswrapper[4743]: I1123 00:08:04.938898 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:04Z","lastTransitionTime":"2025-11-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.042341 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.042389 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.042399 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.042417 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.042432 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:05Z","lastTransitionTime":"2025-11-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.145269 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.145350 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.145366 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.145389 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.145403 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:05Z","lastTransitionTime":"2025-11-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.248116 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.248198 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.248213 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.248235 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.248246 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:05Z","lastTransitionTime":"2025-11-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.350862 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.350962 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.350987 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.351021 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.351044 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:05Z","lastTransitionTime":"2025-11-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.454861 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.455000 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.455020 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.455094 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.455114 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:05Z","lastTransitionTime":"2025-11-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.560137 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.560185 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.560196 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.560213 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.560226 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:05Z","lastTransitionTime":"2025-11-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.663377 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.663436 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.663449 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.663470 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.663501 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:05Z","lastTransitionTime":"2025-11-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.721309 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:05 crc kubenswrapper[4743]: E1123 00:08:05.721636 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.767067 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.767175 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.767200 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.767234 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.767254 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:05Z","lastTransitionTime":"2025-11-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.871990 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.872056 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.872074 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.872101 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.872119 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:05Z","lastTransitionTime":"2025-11-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.975357 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.975452 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.975473 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.975553 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:05 crc kubenswrapper[4743]: I1123 00:08:05.975572 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:05Z","lastTransitionTime":"2025-11-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.079478 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.079585 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.079606 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.079639 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.079661 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:06Z","lastTransitionTime":"2025-11-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.183198 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.183263 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.183276 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.183365 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.183383 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:06Z","lastTransitionTime":"2025-11-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.286448 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.286518 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.286537 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.286612 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.286627 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:06Z","lastTransitionTime":"2025-11-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.389342 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.389415 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.389426 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.389444 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.389456 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:06Z","lastTransitionTime":"2025-11-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.493271 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.493342 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.493356 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.493383 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.493402 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:06Z","lastTransitionTime":"2025-11-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.596727 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.596954 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.596977 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.597003 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.597024 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:06Z","lastTransitionTime":"2025-11-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.700720 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.700773 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.700782 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.700843 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.700857 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:06Z","lastTransitionTime":"2025-11-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.722176 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:06 crc kubenswrapper[4743]: E1123 00:08:06.722316 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.722379 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.722379 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:06 crc kubenswrapper[4743]: E1123 00:08:06.722612 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:06 crc kubenswrapper[4743]: E1123 00:08:06.722637 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.803830 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.803894 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.803907 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.803929 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.803942 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:06Z","lastTransitionTime":"2025-11-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.906254 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.906295 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.906304 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.906320 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:06 crc kubenswrapper[4743]: I1123 00:08:06.906331 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:06Z","lastTransitionTime":"2025-11-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.009160 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.009237 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.009254 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.009280 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.009296 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.112103 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.112140 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.112175 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.112200 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.112216 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.215118 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.215174 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.215187 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.215208 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.215222 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.319038 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.319569 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.319589 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.319614 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.319631 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.422477 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.422564 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.422578 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.422602 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.422619 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.525970 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.526020 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.526033 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.526051 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.526065 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.628857 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.628905 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.628913 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.628933 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.628944 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.717401 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.717447 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.717456 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.717472 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.717498 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.721753 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:07 crc kubenswrapper[4743]: E1123 00:08:07.721900 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:07 crc kubenswrapper[4743]: E1123 00:08:07.734822 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:07Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.739289 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.739364 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.739378 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.739400 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.739414 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: E1123 00:08:07.755877 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:07Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.759770 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.759849 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.760049 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.760083 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.760106 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: E1123 00:08:07.776578 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:07Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.781203 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.781287 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.781311 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.781344 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.781367 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: E1123 00:08:07.804149 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:07Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.808390 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.808426 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.808440 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.808462 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.808480 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: E1123 00:08:07.821023 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:07Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:07 crc kubenswrapper[4743]: E1123 00:08:07.821181 4743 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.823080 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.823149 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.823174 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.823202 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.823220 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.925822 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.925891 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.925908 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.925933 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:07 crc kubenswrapper[4743]: I1123 00:08:07.925952 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:07Z","lastTransitionTime":"2025-11-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.029176 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.029240 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.029260 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.029281 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.029292 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:08Z","lastTransitionTime":"2025-11-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.132436 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.132552 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.132571 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.132597 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.132615 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:08Z","lastTransitionTime":"2025-11-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.236876 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.236981 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.237000 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.237057 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.237076 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:08Z","lastTransitionTime":"2025-11-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.340588 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.340642 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.340660 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.340683 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.340698 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:08Z","lastTransitionTime":"2025-11-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.445466 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.445772 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.445793 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.445824 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.445845 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:08Z","lastTransitionTime":"2025-11-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.549430 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.549513 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.549525 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.549561 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.549574 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:08Z","lastTransitionTime":"2025-11-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.651831 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.651874 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.651886 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.651901 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.651911 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:08Z","lastTransitionTime":"2025-11-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.721946 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.722009 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.721946 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:08 crc kubenswrapper[4743]: E1123 00:08:08.722213 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:08 crc kubenswrapper[4743]: E1123 00:08:08.722602 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:08 crc kubenswrapper[4743]: E1123 00:08:08.722262 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.746449 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.755031 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.755146 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.755158 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.755183 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.755197 4743 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:08Z","lastTransitionTime":"2025-11-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.773067 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.786585 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.800754 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.816556 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.833364 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.848470 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.857864 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.857908 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.857925 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.857951 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.857968 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:08Z","lastTransitionTime":"2025-11-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.868703 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e
02797180f12d36641d55cd95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:42Z\\\",\\\"message\\\":\\\"g new object: *v1.Pod openshift-dns/node-resolver-kvwqd\\\\nI1123 00:07:42.706934 6409 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-kvwqd in node crc\\\\nI1123 00:07:42.706907 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 00:07:42.706939 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1123 00:07:42.706949 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:42.706953 6409 obj_retry.go:303] Retry object \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.884054 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.900460 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.915838 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.931723 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.948220 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:59Z\\\",\\\"message\\\":\\\"2025-11-23T00:07:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7\\\\n2025-11-23T00:07:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7 to /host/opt/cni/bin/\\\\n2025-11-23T00:07:14Z [verbose] multus-daemon started\\\\n2025-11-23T00:07:14Z [verbose] Readiness Indicator file check\\\\n2025-11-23T00:07:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.962176 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.962221 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.962235 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.962255 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.962266 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:08Z","lastTransitionTime":"2025-11-23T00:08:08Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.964876 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 00:08:08.978365 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dfeb5cb-3737-4c27-95ad-6d780b3d17dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e56c77b063825f42a5134699a4e67ab4bb0f3f48f7fa7521e091156c6f63504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:08 crc kubenswrapper[4743]: I1123 
00:08:08.993849 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:08Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.007028 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:09Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.019819 4743 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:09Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.065077 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.065420 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.065514 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.065748 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.065791 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:09Z","lastTransitionTime":"2025-11-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.169007 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.169060 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.169077 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.169099 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.169114 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:09Z","lastTransitionTime":"2025-11-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.272130 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.272186 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.272200 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.272221 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.272237 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:09Z","lastTransitionTime":"2025-11-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.375605 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.375656 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.375666 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.375683 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.375696 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:09Z","lastTransitionTime":"2025-11-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.478824 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.478880 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.478892 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.478911 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.478921 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:09Z","lastTransitionTime":"2025-11-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.582705 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.582798 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.582822 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.582855 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.582877 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:09Z","lastTransitionTime":"2025-11-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.687053 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.687116 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.687130 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.687156 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.687173 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:09Z","lastTransitionTime":"2025-11-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.721830 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:09 crc kubenswrapper[4743]: E1123 00:08:09.722083 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.723371 4743 scope.go:117] "RemoveContainer" containerID="6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.790680 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.790751 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.790775 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.790808 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.790826 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:09Z","lastTransitionTime":"2025-11-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.894341 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.894385 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.894398 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.894421 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.894437 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:09Z","lastTransitionTime":"2025-11-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.997592 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.997640 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.997652 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.997669 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:09 crc kubenswrapper[4743]: I1123 00:08:09.997683 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:09Z","lastTransitionTime":"2025-11-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.101393 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.101464 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.101514 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.101544 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.101562 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:10Z","lastTransitionTime":"2025-11-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.206059 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.206181 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.206211 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.206251 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.206278 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:10Z","lastTransitionTime":"2025-11-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.297090 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/2.log" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.300761 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d"} Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.301895 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.309130 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.309181 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.309198 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.309224 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.309242 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:10Z","lastTransitionTime":"2025-11-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.324934 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.345291 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:59Z\\\",\\\"message\\\":\\\"2025-11-23T00:07:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7\\\\n2025-11-23T00:07:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7 to /host/opt/cni/bin/\\\\n2025-11-23T00:07:14Z [verbose] multus-daemon started\\\\n2025-11-23T00:07:14Z [verbose] Readiness Indicator file check\\\\n2025-11-23T00:07:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.364420 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.380378 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.395749 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 
2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.409554 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.411643 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.411694 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.411712 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.411734 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.411747 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:10Z","lastTransitionTime":"2025-11-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.427896 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.444416 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dfeb5cb-3737-4c27-95ad-6d780b3d17dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e56c77b063825f42a5134699a4e67ab4bb0f3f48f7fa7521e091156c6f63504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.468192 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.490102 
4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.513404 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.515316 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 
00:08:10.515363 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.515376 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.515396 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.515410 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:10Z","lastTransitionTime":"2025-11-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.535660 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"20
25-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.549978 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.562443 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.573745 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.591107 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:42Z\\\",\\\"message\\\":\\\"g new object: *v1.Pod openshift-dns/node-resolver-kvwqd\\\\nI1123 00:07:42.706934 6409 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-kvwqd in node crc\\\\nI1123 00:07:42.706907 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 00:07:42.706939 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1123 00:07:42.706949 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:42.706953 6409 obj_retry.go:303] Retry object 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.603777 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.616216 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.617942 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.618008 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.618019 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.618039 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.618052 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:10Z","lastTransitionTime":"2025-11-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.721015 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.721059 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.721071 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.721087 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.721097 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:10Z","lastTransitionTime":"2025-11-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.721204 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.721234 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.721258 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:10 crc kubenswrapper[4743]: E1123 00:08:10.721348 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:10 crc kubenswrapper[4743]: E1123 00:08:10.721438 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:10 crc kubenswrapper[4743]: E1123 00:08:10.721581 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.823178 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.823232 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.823245 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.823264 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.823277 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:10Z","lastTransitionTime":"2025-11-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.926038 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.926076 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.926084 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.926099 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:10 crc kubenswrapper[4743]: I1123 00:08:10.926109 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:10Z","lastTransitionTime":"2025-11-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.030331 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.030412 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.030432 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.030459 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.030478 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:11Z","lastTransitionTime":"2025-11-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.134226 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.134314 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.134334 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.134361 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.134380 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:11Z","lastTransitionTime":"2025-11-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.237845 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.237896 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.237912 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.237935 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.237950 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:11Z","lastTransitionTime":"2025-11-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.311138 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/3.log" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.314337 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/2.log" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.319113 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" exitCode=1 Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.319179 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d"} Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.319228 4743 scope.go:117] "RemoveContainer" containerID="6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.321997 4743 scope.go:117] "RemoveContainer" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:08:11 crc kubenswrapper[4743]: E1123 00:08:11.323598 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.341081 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.341231 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.341256 4743 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.341321 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.341345 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:11Z","lastTransitionTime":"2025-11-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.343339 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.365932 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.379711 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.406789 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5f6aac4f04ba257a0f392cd1a549f21411fc8e02797180f12d36641d55cd95\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:42Z\\\",\\\"message\\\":\\\"g new object: *v1.Pod openshift-dns/node-resolver-kvwqd\\\\nI1123 00:07:42.706934 6409 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-kvwqd in node crc\\\\nI1123 00:07:42.706907 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1123 00:07:42.706939 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1123 00:07:42.706949 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:07:42Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:07:42.706953 6409 obj_retry.go:303] Retry object 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:08:10Z\\\",\\\"message\\\":\\\"kube-scheduler-crc openshift-machine-config-operator/machine-config-daemon-cxtxv]\\\\nF1123 00:08:10.766926 6795 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:08:10.766936 6795 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 00:08:10.766959 6795 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-cxtxv\\\\nI1123 00:08:10.766957 6795 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.422058 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.444468 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.444549 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.444561 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.444579 4743 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.444589 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:11Z","lastTransitionTime":"2025-11-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.445010 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.464620 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.482585 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.503142 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:59Z\\\",\\\"message\\\":\\\"2025-11-23T00:07:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7\\\\n2025-11-23T00:07:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7 to /host/opt/cni/bin/\\\\n2025-11-23T00:07:14Z [verbose] multus-daemon started\\\\n2025-11-23T00:07:14Z [verbose] Readiness Indicator file check\\\\n2025-11-23T00:07:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.523121 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.538358 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dfeb5cb-3737-4c27-95ad-6d780b3d17dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e56c77b063825f42a5134699a4e67ab4bb0f3f48f7fa7521e091156c6f63504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.547272 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.547316 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.547330 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.547351 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.547365 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:11Z","lastTransitionTime":"2025-11-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.560101 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.575612 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.590793 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\
\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.607256 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.623388 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.638965 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.651047 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.651108 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.651126 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.651155 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.651175 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:11Z","lastTransitionTime":"2025-11-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.721990 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:11 crc kubenswrapper[4743]: E1123 00:08:11.722205 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.753627 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.753719 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.753749 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.753782 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.753805 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:11Z","lastTransitionTime":"2025-11-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.819937 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:11Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.856697 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.856752 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.856770 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.856796 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.856814 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:11Z","lastTransitionTime":"2025-11-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.959975 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.960033 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.960051 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.960079 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:11 crc kubenswrapper[4743]: I1123 00:08:11.960096 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:11Z","lastTransitionTime":"2025-11-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.063020 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.063100 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.063119 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.063149 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.063174 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:12Z","lastTransitionTime":"2025-11-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.166852 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.166929 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.166954 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.166986 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.167015 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:12Z","lastTransitionTime":"2025-11-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.270552 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.270623 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.270643 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.270672 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.270691 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:12Z","lastTransitionTime":"2025-11-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.325966 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/3.log" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.331882 4743 scope.go:117] "RemoveContainer" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:08:12 crc kubenswrapper[4743]: E1123 00:08:12.332173 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.357200 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:59Z\\\",\\\"message\\\":\\\"2025-11-23T00:07:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7\\\\n2025-11-23T00:07:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7 to /host/opt/cni/bin/\\\\n2025-11-23T00:07:14Z [verbose] multus-daemon started\\\\n2025-11-23T00:07:14Z [verbose] Readiness Indicator file check\\\\n2025-11-23T00:07:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.373940 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.374008 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.374026 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.374054 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.374072 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:12Z","lastTransitionTime":"2025-11-23T00:08:12Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.383023 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.403743 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.433114 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.454556 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.477405 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.495123 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dfeb5cb-3737-4c27-95ad-6d780b3d17dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e56c77b063825f42a5134699a4e67ab4bb0f3f48f7fa7521e091156c6f63504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.498233 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.498270 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.498283 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.498303 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.498316 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:12Z","lastTransitionTime":"2025-11-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.515629 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.539957 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.558846 4743 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.577144 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.601008 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.601808 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.601843 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.601854 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.601871 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.601882 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:12Z","lastTransitionTime":"2025-11-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.618379 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.632426 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.667999 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:08:10Z\\\",\\\"message\\\":\\\"kube-scheduler-crc openshift-machine-config-operator/machine-config-daemon-cxtxv]\\\\nF1123 00:08:10.766926 6795 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:08:10.766936 6795 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 00:08:10.766959 6795 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-cxtxv\\\\nI1123 00:08:10.766957 6795 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:08:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.686279 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.706900 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.706971 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.706990 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.707021 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.707040 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:12Z","lastTransitionTime":"2025-11-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.707028 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.722282 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.722394 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.722306 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:12 crc kubenswrapper[4743]: E1123 00:08:12.722549 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:12 crc kubenswrapper[4743]: E1123 00:08:12.722708 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:12 crc kubenswrapper[4743]: E1123 00:08:12.722832 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.724872 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:12Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.810255 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.810326 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.810348 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.810428 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.810456 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:12Z","lastTransitionTime":"2025-11-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.914747 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.915264 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.915288 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.915318 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:12 crc kubenswrapper[4743]: I1123 00:08:12.915338 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:12Z","lastTransitionTime":"2025-11-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.019346 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.019450 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.019470 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.019553 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.019574 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:13Z","lastTransitionTime":"2025-11-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.123393 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.123481 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.123532 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.123563 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.123585 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:13Z","lastTransitionTime":"2025-11-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.226631 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.226689 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.226701 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.226718 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.226731 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:13Z","lastTransitionTime":"2025-11-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.330363 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.331035 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.331138 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.331228 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.331309 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:13Z","lastTransitionTime":"2025-11-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.434975 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.435030 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.435050 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.435080 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.435095 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:13Z","lastTransitionTime":"2025-11-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.538773 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.538862 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.538888 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.538924 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.538945 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:13Z","lastTransitionTime":"2025-11-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.641455 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.641588 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.641609 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.641638 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.641661 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:13Z","lastTransitionTime":"2025-11-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.721664 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:13 crc kubenswrapper[4743]: E1123 00:08:13.722067 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.745129 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.747428 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.747488 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.747501 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.747551 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.747569 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:13Z","lastTransitionTime":"2025-11-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.850618 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.850667 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.850677 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.850695 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.850706 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:13Z","lastTransitionTime":"2025-11-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.953666 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.953748 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.953766 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.953797 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:13 crc kubenswrapper[4743]: I1123 00:08:13.953813 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:13Z","lastTransitionTime":"2025-11-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.057531 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.057585 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.057594 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.057614 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.057629 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:14Z","lastTransitionTime":"2025-11-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.161012 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.161102 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.161122 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.161152 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.161176 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:14Z","lastTransitionTime":"2025-11-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.263670 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.263753 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.263774 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.263802 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.263822 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:14Z","lastTransitionTime":"2025-11-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.367263 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.367322 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.367362 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.367383 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.367398 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:14Z","lastTransitionTime":"2025-11-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.470598 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.470680 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.470704 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.470737 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.470760 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:14Z","lastTransitionTime":"2025-11-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.574746 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.574834 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.574956 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.574998 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.575024 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:14Z","lastTransitionTime":"2025-11-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.679432 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.679565 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.679589 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.679622 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.679644 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:14Z","lastTransitionTime":"2025-11-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.721839 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.721934 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.722061 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.722214 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.722362 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.722625 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.749619 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.749787 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.749909 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:18.749859814 +0000 UTC m=+150.827958021 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.749980 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.750012 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.750035 4743 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.750055 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.750107 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 00:09:18.750080859 +0000 UTC m=+150.828179026 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.750168 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.750212 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.750300 4743 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.750351 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.750352 4743 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.750371 4743 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.750399 4743 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.750410 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:09:18.750386097 +0000 UTC m=+150.828484424 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.750469 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 00:09:18.750429238 +0000 UTC m=+150.828527585 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 00:08:14 crc kubenswrapper[4743]: E1123 00:08:14.750564 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 00:09:18.750541491 +0000 UTC m=+150.828639858 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.783200 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.783243 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.783258 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.783278 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.783295 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:14Z","lastTransitionTime":"2025-11-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.887143 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.887209 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.887231 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.887261 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.887282 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:14Z","lastTransitionTime":"2025-11-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.991036 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.991109 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.991127 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.991153 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:14 crc kubenswrapper[4743]: I1123 00:08:14.991170 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:14Z","lastTransitionTime":"2025-11-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.094534 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.094599 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.094619 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.094647 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.094667 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:15Z","lastTransitionTime":"2025-11-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.198484 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.198587 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.198605 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.198629 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.198647 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:15Z","lastTransitionTime":"2025-11-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.303627 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.303686 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.303704 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.303732 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.303753 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:15Z","lastTransitionTime":"2025-11-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.408281 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.408352 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.408366 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.408386 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.408399 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:15Z","lastTransitionTime":"2025-11-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.512329 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.512387 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.512398 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.512416 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.512427 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:15Z","lastTransitionTime":"2025-11-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.615299 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.615382 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.615396 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.615417 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.615430 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:15Z","lastTransitionTime":"2025-11-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.719249 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.719357 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.719372 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.719393 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.719407 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:15Z","lastTransitionTime":"2025-11-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.722011 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:15 crc kubenswrapper[4743]: E1123 00:08:15.722420 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.823552 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.823626 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.823646 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.823678 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.823700 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:15Z","lastTransitionTime":"2025-11-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.928119 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.928219 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.928246 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.928279 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:15 crc kubenswrapper[4743]: I1123 00:08:15.928299 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:15Z","lastTransitionTime":"2025-11-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.031122 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.031166 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.031182 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.031200 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.031211 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:16Z","lastTransitionTime":"2025-11-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.134586 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.134644 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.134656 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.134677 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.134691 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:16Z","lastTransitionTime":"2025-11-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.237987 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.238063 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.238083 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.238110 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.238123 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:16Z","lastTransitionTime":"2025-11-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.341345 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.341424 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.341443 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.341472 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.341531 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:16Z","lastTransitionTime":"2025-11-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.445226 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.445310 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.445328 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.445353 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.445370 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:16Z","lastTransitionTime":"2025-11-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.548116 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.548165 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.548178 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.548200 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.548216 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:16Z","lastTransitionTime":"2025-11-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.651287 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.651368 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.651393 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.651424 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.651447 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:16Z","lastTransitionTime":"2025-11-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.721427 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.721551 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.721460 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:16 crc kubenswrapper[4743]: E1123 00:08:16.721690 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:16 crc kubenswrapper[4743]: E1123 00:08:16.721774 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:16 crc kubenswrapper[4743]: E1123 00:08:16.721858 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.754401 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.754434 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.754447 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.754463 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.754475 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:16Z","lastTransitionTime":"2025-11-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.858002 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.858056 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.858073 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.858096 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.858113 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:16Z","lastTransitionTime":"2025-11-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.961400 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.961456 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.961466 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.961484 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:16 crc kubenswrapper[4743]: I1123 00:08:16.961509 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:16Z","lastTransitionTime":"2025-11-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.064971 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.065059 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.065075 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.065102 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.065119 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:17Z","lastTransitionTime":"2025-11-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.169292 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.169356 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.169368 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.169389 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.169400 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:17Z","lastTransitionTime":"2025-11-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.273357 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.273409 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.273419 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.273441 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.273452 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:17Z","lastTransitionTime":"2025-11-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.376298 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.376368 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.376389 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.376417 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.376437 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:17Z","lastTransitionTime":"2025-11-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.479473 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.479568 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.479586 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.479610 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.479629 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:17Z","lastTransitionTime":"2025-11-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.582535 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.582606 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.582627 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.582654 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.582671 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:17Z","lastTransitionTime":"2025-11-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.685842 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.685902 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.685920 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.685945 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.685990 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:17Z","lastTransitionTime":"2025-11-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.721985 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:17 crc kubenswrapper[4743]: E1123 00:08:17.722207 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.789016 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.789101 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.789125 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.789158 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.789180 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:17Z","lastTransitionTime":"2025-11-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.892354 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.892558 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.892584 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.892618 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.892644 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:17Z","lastTransitionTime":"2025-11-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.995751 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.995813 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.995834 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.995858 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:17 crc kubenswrapper[4743]: I1123 00:08:17.995874 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:17Z","lastTransitionTime":"2025-11-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.009353 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.009414 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.009436 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.009463 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.009517 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: E1123 00:08:18.031439 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.037923 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.038072 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.038104 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.038137 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.038162 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: E1123 00:08:18.061296 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.067474 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.067621 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.067646 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.067676 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.067730 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: E1123 00:08:18.091962 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.097314 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.097365 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.097382 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.097409 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.097431 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: E1123 00:08:18.112734 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.120006 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.120056 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.120075 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.120114 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.120131 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: E1123 00:08:18.139603 4743 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0eaa79e4-cb03-4fb9-8d2d-8ffcbbe853e3\\\",\\\"systemUUID\\\":\\\"3d2e0a67-330f-4e1b-8e8f-608360b1d20e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: E1123 00:08:18.139837 4743 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.142156 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.142225 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.142244 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.142268 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.142284 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.246171 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.246244 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.246265 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.246297 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.246324 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.349695 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.349738 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.349748 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.349789 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.349801 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.453349 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.453424 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.453452 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.453531 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.453561 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.557720 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.557792 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.557811 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.557837 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.557855 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.661196 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.661259 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.661278 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.661304 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.661325 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.722039 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.722151 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:18 crc kubenswrapper[4743]: E1123 00:08:18.722254 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:18 crc kubenswrapper[4743]: E1123 00:08:18.722397 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.722582 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:18 crc kubenswrapper[4743]: E1123 00:08:18.722750 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.748696 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c343c8-4ea4-4695-badb-11c95f8fbbf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8f14afdc96611956d549aeefe391fa1bccebfb30f4215a9f8b0afb9e324ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f80bba9ce04f2669a072dd1cc72ff9a725a3e811ad0279491a89dbb12e8a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd11904b9348b9162a484c6f94b66baf136b526a943b0808a0ff83d52f26891\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5b2475f40fd912cff943b3a8ef3c05dcb6cb0718708733085cf5e4599e85f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb591c2a02a76b97bf589a828e2a87f99e9e190a4571b87b3209df8b9ccfe70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1123 00:07:04.708338 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 00:07:04.709282 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-362913789/tls.crt::/tmp/serving-cert-362913789/tls.key\\\\\\\"\\\\nI1123 00:07:10.100218 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 00:07:10.106422 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 00:07:10.106445 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 00:07:10.106464 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 00:07:10.106472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 00:07:10.111618 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1123 00:07:10.111647 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1123 00:07:10.111675 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111682 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 00:07:10.111687 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 00:07:10.111690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 00:07:10.111693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 00:07:10.111696 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1123 00:07:10.113786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7658202799cb611725f649f658e0f45af2268a6a6e82e3f0f9fb002c94a86c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0424d624e22f80d8ba657682139e799354f969e0cfac614d54bda863bcb289d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.765132 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.765222 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.765246 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.765281 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.765304 4743 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.771693 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0fc0650-484b-4b66-93ff-6cde78c60014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca353e5601b65d317e0fd795a0bb0e56ddb2290011b283f95c03ce5df1d3350c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c930e50d99d32c39e9056b22560c3acb1b067320f2306a29d76f56be4d0c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be9870e12d70e05293b10a5f35947e0fdc3bf7b22b1c6af4474872386798688\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb62bc6622f9eca734aa887725534479b86c8d4561e7cdb3f8cbb1276182c923\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.789579 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwqq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15024711-2e2a-406c-b47f-19b3dabc6202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4512699d26a29c770e77d70372d19ce616c1980afd4f341d45ca4724e6bd981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwzld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwqq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.807881 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24ea31d8-fd1d-4396-9b78-3058666d315a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcbqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8ddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.844961 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a326b5d-d1b5-4bf6-9d38-c039d3904611\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa53db5e48541274d532c593c3cf33aea41d9329aa7a25e35803ea2854274e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6810fc1c4651b2076cc9637261a24720d02d16dc57e72805aaedb25c13e8b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7329979bf15a7e36fbef2a46759df2324ac9cfb088a768192f44319cb7f3131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b272d18651f2645902da5305e7c223c5b282994
977f347c4d9a4d39e7ceae26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://038ac58c80d07351f92865d326362026ca8ecc38c92dce1e4df44aa236a36223\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56358066a1a48d72736b08cadd4048dcd41b5c0d8f62aa3c1eadb3c3adc376e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56358066a1a48d72736b08cadd4048dcd41b5c0d8f62aa3c1eadb3c3adc376e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://916330324d3654ea5f620d89b9a427d5807653423e1bc66a0bc3d7c0ee52ef3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://916330324d3654ea5f620d89b9a427d5807653423e1bc66a0bc3d7c0ee52ef3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://18ed5a5cefeb2682a9f6dfdb3eea4491c8f973ef64e61b1e4721ed1e4943da97\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ed5a5cefeb2682a9f6dfdb3eea4491c8f973ef64e61b1e4721ed1e4943da97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.863093 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.868180 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.868427 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.868654 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.868850 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.869038 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.884265 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.903454 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kvwqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca2e6214-c5b2-4734-944c-efbf7e76ad99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eab9cd58248fcb2346beef7e4c956ea9ca25b87105d86a536c094c6e024f770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rrsbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kvwqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.935367 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:08:10Z\\\",\\\"message\\\":\\\"kube-scheduler-crc openshift-machine-config-operator/machine-config-daemon-cxtxv]\\\\nF1123 00:08:10.766926 6795 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:10Z is after 2025-08-24T17:21:41Z]\\\\nI1123 00:08:10.766936 6795 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 00:08:10.766959 6795 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-cxtxv\\\\nI1123 00:08:10.766957 6795 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:08:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v64gz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.954890 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a06a2136-201a-4824-b92e-7bdc103f811c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef02efa4ca540b5e54bcb6af7508e2ebdabd913317730414e3a158fa1a86c83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a012119af8f56e5cdbf4d8eaf0829d7606c39f5557e8dc2ac06224ddd965251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e322433434122580672175be614e7a7a6657e96784749fd7e39a8228a08fb55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597688fa387bd24a1665db5f1aa197256fa9213006c1019750859a4bb6e6b066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.972342 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.972405 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.972427 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.972454 4743 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.972472 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:18Z","lastTransitionTime":"2025-11-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.973037 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350c32caf7926a95e32c2bde68eca06390c5289dcd88f96bbc56cb29d0b519e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:18 crc kubenswrapper[4743]: I1123 00:08:18.992192 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edcc45f2c15e268606f21518765e9003de078bea00c24aaf37ddb88add65d42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:18Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.011841 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.029458 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zvknx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0418df6-be6b-459c-8685-770bc9c99a0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T00:07:59Z\\\",\\\"message\\\":\\\"2025-11-23T00:07:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7\\\\n2025-11-23T00:07:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8f9ed96-023c-4a4b-9fbe-03638035adf7 to /host/opt/cni/bin/\\\\n2025-11-23T00:07:14Z [verbose] multus-daemon started\\\\n2025-11-23T00:07:14Z [verbose] Readiness Indicator file check\\\\n2025-11-23T00:07:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx4q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zvknx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.055623 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s4k55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef0a151b-b7ac-4e51-a464-c9e11f9a4ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14a74e944d9bc38935367ae011b0c6ff87b807a48ed19de2990b145e16f2da6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18a98c167636d764521df5ea21c68e534c9d2b808ab15fd6c3300e38805162e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b973137638406f45aaa2f595b97448a0acc41e77f54846a6f84799cc133e65cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e88370c4053ffc10a37219d98c7451899e5dcb7830e6b4c6f3d6e71c462a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c2da511594102e8389ab5a00c878ef081e3dd15410d91e8610406c132f177d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e57b7f7f292bbcf1eca895284d5ad4c90ea4d4d883a596ea83b54b17a2fa9ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16caec0229cefe9fa5f08d1ad3b80940ad957695b0d9ae4ed322a624071cdcc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wt7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s4k55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.072499 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dfeb5cb-3737-4c27-95ad-6d780b3d17dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e56c77b063825f42a5134699a4e67ab4bb0f3f48f7fa7521e091156c6f63504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27d827bd0b61c4e438ac12dac2904bb02a63e18da70ad9490143301eeabe0a76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.077042 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.077107 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.077136 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.077168 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.077188 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:19Z","lastTransitionTime":"2025-11-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.095073 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e6a42c52228cc30aa552c91f95d986a950aa4fb1eda85eb7bbd38c8472e2c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eeff804c76c7db265cb8425330be2101ddf50114fde113d8c9429c4e770d55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.108962 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbda6ee4-c567-4104-9c7a-ca01c6f9d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a20e300cef7c9e74a076cd930b0684c05c447e04f67ffdd07a6b1e6cd92479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84h6b\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cxtxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.125778 4743 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2354a4bd-98b1-489f-a4dc-562d4ce123ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f8c324530a78de211d2cba70d126d102792d2fb02a1c2f23ab4991fe203d164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff56dc364254fc22b26991b0f9a13308a5c8032451d97296d6deea07324ce03f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\
\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt59c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T00:07:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q8h4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T00:08:19Z is after 2025-08-24T17:21:41Z" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.180212 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.180252 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.180265 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.180286 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.180299 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:19Z","lastTransitionTime":"2025-11-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.282528 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.282597 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.282613 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.282635 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.282650 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:19Z","lastTransitionTime":"2025-11-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.384858 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.385329 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.385565 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.385728 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.385857 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:19Z","lastTransitionTime":"2025-11-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.489811 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.490201 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.490271 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.490365 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.490446 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:19Z","lastTransitionTime":"2025-11-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.593862 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.594666 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.594700 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.594722 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.594737 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:19Z","lastTransitionTime":"2025-11-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.697106 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.697149 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.697158 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.697173 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.697184 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:19Z","lastTransitionTime":"2025-11-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.722010 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:19 crc kubenswrapper[4743]: E1123 00:08:19.722161 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.800676 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.800720 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.800736 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.800757 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.800774 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:19Z","lastTransitionTime":"2025-11-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.903723 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.903795 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.903817 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.903847 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:19 crc kubenswrapper[4743]: I1123 00:08:19.903870 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:19Z","lastTransitionTime":"2025-11-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.007864 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.007941 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.007952 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.007991 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.008010 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:20Z","lastTransitionTime":"2025-11-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.111665 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.111743 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.111765 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.111795 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.111816 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:20Z","lastTransitionTime":"2025-11-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.215449 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.215580 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.215599 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.215627 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.215648 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:20Z","lastTransitionTime":"2025-11-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.319537 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.319612 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.319647 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.319679 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.319701 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:20Z","lastTransitionTime":"2025-11-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.423175 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.423245 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.423256 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.423274 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.423285 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:20Z","lastTransitionTime":"2025-11-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.527163 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.527251 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.527278 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.527305 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.527323 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:20Z","lastTransitionTime":"2025-11-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.631364 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.631446 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.631468 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.631543 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.631571 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:20Z","lastTransitionTime":"2025-11-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.721946 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.721995 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:20 crc kubenswrapper[4743]: E1123 00:08:20.722169 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.722192 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:20 crc kubenswrapper[4743]: E1123 00:08:20.722319 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:20 crc kubenswrapper[4743]: E1123 00:08:20.722456 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.735046 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.735118 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.735143 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.735170 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.735188 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:20Z","lastTransitionTime":"2025-11-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.839362 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.839449 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.839468 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.839553 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.839579 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:20Z","lastTransitionTime":"2025-11-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.943596 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.943672 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.943691 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.943723 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:20 crc kubenswrapper[4743]: I1123 00:08:20.943750 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:20Z","lastTransitionTime":"2025-11-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.047565 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.047682 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.047703 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.047734 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.047752 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:21Z","lastTransitionTime":"2025-11-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.151846 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.151925 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.151949 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.151981 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.152004 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:21Z","lastTransitionTime":"2025-11-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.255407 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.255467 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.255489 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.255561 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.255582 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:21Z","lastTransitionTime":"2025-11-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.358807 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.358893 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.358914 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.358940 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.358960 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:21Z","lastTransitionTime":"2025-11-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.462755 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.462836 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.462858 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.462887 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.462908 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:21Z","lastTransitionTime":"2025-11-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.566238 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.566298 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.566316 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.566341 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.566360 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:21Z","lastTransitionTime":"2025-11-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.670742 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.670830 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.670838 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.670856 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.670869 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:21Z","lastTransitionTime":"2025-11-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.722243 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:21 crc kubenswrapper[4743]: E1123 00:08:21.722437 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.774013 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.774431 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.774453 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.774481 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.774540 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:21Z","lastTransitionTime":"2025-11-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.878060 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.878139 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.878161 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.878246 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.878274 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:21Z","lastTransitionTime":"2025-11-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.981653 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.981724 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.981743 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.981773 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:21 crc kubenswrapper[4743]: I1123 00:08:21.981794 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:21Z","lastTransitionTime":"2025-11-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.086360 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.086466 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.086519 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.086550 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.086572 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:22Z","lastTransitionTime":"2025-11-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.190843 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.190896 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.190915 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.190940 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.190961 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:22Z","lastTransitionTime":"2025-11-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.293723 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.293783 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.293799 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.293820 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.293835 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:22Z","lastTransitionTime":"2025-11-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.396896 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.396977 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.397003 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.397038 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.397071 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:22Z","lastTransitionTime":"2025-11-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.501066 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.501106 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.501116 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.501133 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.501144 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:22Z","lastTransitionTime":"2025-11-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.603936 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.603977 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.603989 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.604007 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.604021 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:22Z","lastTransitionTime":"2025-11-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.706878 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.706937 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.706950 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.706968 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.706984 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:22Z","lastTransitionTime":"2025-11-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.721738 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.721820 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:22 crc kubenswrapper[4743]: E1123 00:08:22.721903 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.721765 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:22 crc kubenswrapper[4743]: E1123 00:08:22.722312 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:22 crc kubenswrapper[4743]: E1123 00:08:22.722425 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.809605 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.809691 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.809712 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.809743 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.809765 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:22Z","lastTransitionTime":"2025-11-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.912774 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.912852 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.912870 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.912896 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:22 crc kubenswrapper[4743]: I1123 00:08:22.912916 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:22Z","lastTransitionTime":"2025-11-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.015095 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.015168 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.015180 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.015202 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.015217 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:23Z","lastTransitionTime":"2025-11-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.118441 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.118500 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.118514 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.118538 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.118552 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:23Z","lastTransitionTime":"2025-11-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.221574 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.221660 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.221678 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.221707 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.221727 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:23Z","lastTransitionTime":"2025-11-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.325599 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.325881 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.325895 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.325916 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.325931 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:23Z","lastTransitionTime":"2025-11-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.429027 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.429106 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.429124 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.429151 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.429173 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:23Z","lastTransitionTime":"2025-11-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.532706 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.532786 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.532807 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.532841 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.532864 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:23Z","lastTransitionTime":"2025-11-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.636624 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.636676 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.636688 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.636717 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.636730 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:23Z","lastTransitionTime":"2025-11-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.721851 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:23 crc kubenswrapper[4743]: E1123 00:08:23.722264 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.739849 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.739915 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.739932 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.739952 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.739964 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:23Z","lastTransitionTime":"2025-11-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.842139 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.842215 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.842237 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.842267 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.842290 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:23Z","lastTransitionTime":"2025-11-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.945544 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.945601 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.945615 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.945633 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:23 crc kubenswrapper[4743]: I1123 00:08:23.945647 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:23Z","lastTransitionTime":"2025-11-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.049210 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.049263 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.049281 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.049305 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.049322 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:24Z","lastTransitionTime":"2025-11-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.152979 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.153042 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.153055 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.153076 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.153089 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:24Z","lastTransitionTime":"2025-11-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.257057 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.257124 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.257141 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.257171 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.257195 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:24Z","lastTransitionTime":"2025-11-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.361111 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.361207 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.361234 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.361267 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.361337 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:24Z","lastTransitionTime":"2025-11-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.464391 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.464441 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.464453 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.464476 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.464522 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:24Z","lastTransitionTime":"2025-11-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.567448 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.567538 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.567555 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.567579 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.567598 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:24Z","lastTransitionTime":"2025-11-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.670536 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.671205 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.671222 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.671248 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.671274 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:24Z","lastTransitionTime":"2025-11-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.722094 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.722135 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.722226 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:24 crc kubenswrapper[4743]: E1123 00:08:24.722420 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:24 crc kubenswrapper[4743]: E1123 00:08:24.722643 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:24 crc kubenswrapper[4743]: E1123 00:08:24.722803 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.774262 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.774333 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.774357 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.774394 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.774421 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:24Z","lastTransitionTime":"2025-11-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.877737 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.877833 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.877855 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.877889 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.877913 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:24Z","lastTransitionTime":"2025-11-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.981123 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.981192 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.981210 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.981238 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:24 crc kubenswrapper[4743]: I1123 00:08:24.981256 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:24Z","lastTransitionTime":"2025-11-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.085723 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.085785 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.085804 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.085833 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.085854 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:25Z","lastTransitionTime":"2025-11-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.189122 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.189191 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.189211 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.189239 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.189261 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:25Z","lastTransitionTime":"2025-11-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.292928 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.293007 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.293032 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.293063 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.293085 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:25Z","lastTransitionTime":"2025-11-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.396395 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.396528 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.396548 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.396578 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.396597 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:25Z","lastTransitionTime":"2025-11-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.499958 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.500037 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.500055 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.500084 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.500103 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:25Z","lastTransitionTime":"2025-11-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.602376 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.602432 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.602446 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.602467 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.602482 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:25Z","lastTransitionTime":"2025-11-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.705936 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.706031 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.706059 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.706094 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.706122 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:25Z","lastTransitionTime":"2025-11-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.721595 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:25 crc kubenswrapper[4743]: E1123 00:08:25.722059 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.722312 4743 scope.go:117] "RemoveContainer" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:08:25 crc kubenswrapper[4743]: E1123 00:08:25.722525 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.809386 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.809456 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.809469 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.809508 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.809523 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:25Z","lastTransitionTime":"2025-11-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.911712 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.911779 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.911791 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.911831 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:25 crc kubenswrapper[4743]: I1123 00:08:25.911846 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:25Z","lastTransitionTime":"2025-11-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.016315 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.016395 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.016418 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.016447 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.016467 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:26Z","lastTransitionTime":"2025-11-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.120832 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.120902 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.120913 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.120935 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.120952 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:26Z","lastTransitionTime":"2025-11-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.224412 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.224515 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.224536 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.224564 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.224582 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:26Z","lastTransitionTime":"2025-11-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.328096 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.328169 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.328182 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.328207 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.328224 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:26Z","lastTransitionTime":"2025-11-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.431473 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.431562 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.431612 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.431641 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.431656 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:26Z","lastTransitionTime":"2025-11-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.535351 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.535440 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.535457 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.535503 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.535523 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:26Z","lastTransitionTime":"2025-11-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.638780 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.638899 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.638916 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.638938 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.638969 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:26Z","lastTransitionTime":"2025-11-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.722322 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.722403 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:26 crc kubenswrapper[4743]: E1123 00:08:26.722690 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.722857 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:26 crc kubenswrapper[4743]: E1123 00:08:26.723075 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:26 crc kubenswrapper[4743]: E1123 00:08:26.724916 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.741139 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.741171 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.741184 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.741200 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.741214 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:26Z","lastTransitionTime":"2025-11-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.844877 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.845461 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.845759 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.846044 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.846259 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:26Z","lastTransitionTime":"2025-11-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.949026 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.949076 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.949085 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.949101 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:26 crc kubenswrapper[4743]: I1123 00:08:26.949113 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:26Z","lastTransitionTime":"2025-11-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.052107 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.052176 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.052196 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.052224 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.052245 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:27Z","lastTransitionTime":"2025-11-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.155949 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.155996 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.156005 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.156025 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.156038 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:27Z","lastTransitionTime":"2025-11-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.260371 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.260455 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.260474 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.260529 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.260549 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:27Z","lastTransitionTime":"2025-11-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.363582 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.363650 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.363667 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.363694 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.363715 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:27Z","lastTransitionTime":"2025-11-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.467093 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.467160 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.467187 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.467223 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.467245 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:27Z","lastTransitionTime":"2025-11-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.571294 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.571377 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.571398 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.571430 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.571456 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:27Z","lastTransitionTime":"2025-11-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.675063 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.675156 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.675191 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.675229 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.675252 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:27Z","lastTransitionTime":"2025-11-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.721464 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:27 crc kubenswrapper[4743]: E1123 00:08:27.721761 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.779221 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.779339 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.779362 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.779419 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.779438 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:27Z","lastTransitionTime":"2025-11-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.883388 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.883606 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.883629 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.883724 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.883743 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:27Z","lastTransitionTime":"2025-11-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.987153 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.987217 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.987231 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.987250 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:27 crc kubenswrapper[4743]: I1123 00:08:27.987262 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:27Z","lastTransitionTime":"2025-11-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.090891 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.090968 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.090992 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.091025 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.091048 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:28Z","lastTransitionTime":"2025-11-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.179074 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.179139 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.179155 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.179180 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.179198 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:28Z","lastTransitionTime":"2025-11-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.211371 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.211413 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.211428 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.211448 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.211464 4743 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T00:08:28Z","lastTransitionTime":"2025-11-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.255412 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs"] Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.256208 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.259445 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.259876 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.260331 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.260429 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.305651 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.305622845 podStartE2EDuration="1m14.305622845s" podCreationTimestamp="2025-11-23 00:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:28.289788784 +0000 UTC m=+100.367886911" watchObservedRunningTime="2025-11-23 00:08:28.305622845 +0000 UTC m=+100.383721012" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.313181 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/12102e11-ff59-404d-a25c-60749d0e53b3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.313248 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/12102e11-ff59-404d-a25c-60749d0e53b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.313292 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12102e11-ff59-404d-a25c-60749d0e53b3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.313353 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/12102e11-ff59-404d-a25c-60749d0e53b3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.313390 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12102e11-ff59-404d-a25c-60749d0e53b3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.322534 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vwqq6" podStartSLOduration=78.322525222 podStartE2EDuration="1m18.322525222s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:28.305603055 +0000 UTC m=+100.383701182" watchObservedRunningTime="2025-11-23 00:08:28.322525222 +0000 UTC m=+100.400623349" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.351892 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.351853046 podStartE2EDuration="1m18.351853046s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:28.349429646 +0000 UTC m=+100.427527843" watchObservedRunningTime="2025-11-23 00:08:28.351853046 +0000 UTC m=+100.429951173" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.401905 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kvwqd" podStartSLOduration=78.40186876 podStartE2EDuration="1m18.40186876s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:28.401115322 +0000 UTC m=+100.479213519" watchObservedRunningTime="2025-11-23 00:08:28.40186876 +0000 UTC m=+100.479966917" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.415234 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/12102e11-ff59-404d-a25c-60749d0e53b3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.415306 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12102e11-ff59-404d-a25c-60749d0e53b3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.415397 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/12102e11-ff59-404d-a25c-60749d0e53b3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.415436 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/12102e11-ff59-404d-a25c-60749d0e53b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.415517 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12102e11-ff59-404d-a25c-60749d0e53b3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.415887 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/12102e11-ff59-404d-a25c-60749d0e53b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.416009 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/12102e11-ff59-404d-a25c-60749d0e53b3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.417821 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/12102e11-ff59-404d-a25c-60749d0e53b3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.427685 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12102e11-ff59-404d-a25c-60749d0e53b3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: 
\"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.434691 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12102e11-ff59-404d-a25c-60749d0e53b3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z4njs\" (UID: \"12102e11-ff59-404d-a25c-60749d0e53b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.466376 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=15.466344292 podStartE2EDuration="15.466344292s" podCreationTimestamp="2025-11-23 00:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:28.466073975 +0000 UTC m=+100.544172142" watchObservedRunningTime="2025-11-23 00:08:28.466344292 +0000 UTC m=+100.544442459" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.541829 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zvknx" podStartSLOduration=78.541801084 podStartE2EDuration="1m18.541801084s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:28.540785179 +0000 UTC m=+100.618883326" watchObservedRunningTime="2025-11-23 00:08:28.541801084 +0000 UTC m=+100.619899221" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.583909 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.598128 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-s4k55" podStartSLOduration=78.598101373 podStartE2EDuration="1m18.598101373s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:28.570545183 +0000 UTC m=+100.648643390" watchObservedRunningTime="2025-11-23 00:08:28.598101373 +0000 UTC m=+100.676199500" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.598919 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.598912773 podStartE2EDuration="45.598912773s" podCreationTimestamp="2025-11-23 00:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:28.597748834 +0000 UTC m=+100.675846961" watchObservedRunningTime="2025-11-23 00:08:28.598912773 +0000 UTC m=+100.677010890" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.652675 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podStartSLOduration=78.652632999 podStartE2EDuration="1m18.652632999s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:28.632050781 +0000 UTC 
m=+100.710148918" watchObservedRunningTime="2025-11-23 00:08:28.652632999 +0000 UTC m=+100.730731156" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.652896 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q8h4t" podStartSLOduration=78.652884315 podStartE2EDuration="1m18.652884315s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:28.65025941 +0000 UTC m=+100.728357557" watchObservedRunningTime="2025-11-23 00:08:28.652884315 +0000 UTC m=+100.730982492" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.687780 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.687753916 podStartE2EDuration="27.687753916s" podCreationTimestamp="2025-11-23 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:28.687225133 +0000 UTC m=+100.765323260" watchObservedRunningTime="2025-11-23 00:08:28.687753916 +0000 UTC m=+100.765852043" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.721836 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:28 crc kubenswrapper[4743]: E1123 00:08:28.729141 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.729562 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:28 crc kubenswrapper[4743]: I1123 00:08:28.729621 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:28 crc kubenswrapper[4743]: E1123 00:08:28.729708 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:28 crc kubenswrapper[4743]: E1123 00:08:28.730007 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:29 crc kubenswrapper[4743]: I1123 00:08:29.408926 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" event={"ID":"12102e11-ff59-404d-a25c-60749d0e53b3","Type":"ContainerStarted","Data":"6a6778e329b347b21f32fa8a20fd122200e08c18db39f1e6a0699854875b6fea"} Nov 23 00:08:29 crc kubenswrapper[4743]: I1123 00:08:29.409008 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" event={"ID":"12102e11-ff59-404d-a25c-60749d0e53b3","Type":"ContainerStarted","Data":"6cb01591cedf029766655d64fe71af3d1b78162a28855356997b3333748b1f25"} Nov 23 00:08:29 crc kubenswrapper[4743]: I1123 00:08:29.721676 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:29 crc kubenswrapper[4743]: E1123 00:08:29.722019 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:30 crc kubenswrapper[4743]: I1123 00:08:30.539410 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:30 crc kubenswrapper[4743]: E1123 00:08:30.539752 4743 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:08:30 crc kubenswrapper[4743]: E1123 00:08:30.539914 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs podName:24ea31d8-fd1d-4396-9b78-3058666d315a nodeName:}" failed. No retries permitted until 2025-11-23 00:09:34.539864773 +0000 UTC m=+166.617963060 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs") pod "network-metrics-daemon-t8ddf" (UID: "24ea31d8-fd1d-4396-9b78-3058666d315a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 00:08:30 crc kubenswrapper[4743]: I1123 00:08:30.721547 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:30 crc kubenswrapper[4743]: I1123 00:08:30.721664 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:30 crc kubenswrapper[4743]: E1123 00:08:30.721834 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:30 crc kubenswrapper[4743]: I1123 00:08:30.721880 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:30 crc kubenswrapper[4743]: E1123 00:08:30.722092 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:30 crc kubenswrapper[4743]: E1123 00:08:30.722265 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:31 crc kubenswrapper[4743]: I1123 00:08:31.721982 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:31 crc kubenswrapper[4743]: E1123 00:08:31.722185 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:32 crc kubenswrapper[4743]: I1123 00:08:32.721255 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:32 crc kubenswrapper[4743]: I1123 00:08:32.721332 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:32 crc kubenswrapper[4743]: E1123 00:08:32.721867 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:32 crc kubenswrapper[4743]: I1123 00:08:32.721375 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:32 crc kubenswrapper[4743]: E1123 00:08:32.722059 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:32 crc kubenswrapper[4743]: E1123 00:08:32.722181 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:33 crc kubenswrapper[4743]: I1123 00:08:33.721893 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:33 crc kubenswrapper[4743]: E1123 00:08:33.722018 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:34 crc kubenswrapper[4743]: I1123 00:08:34.722087 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:34 crc kubenswrapper[4743]: I1123 00:08:34.722192 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:34 crc kubenswrapper[4743]: I1123 00:08:34.722360 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:34 crc kubenswrapper[4743]: E1123 00:08:34.722521 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:34 crc kubenswrapper[4743]: E1123 00:08:34.722576 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:34 crc kubenswrapper[4743]: E1123 00:08:34.722863 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:35 crc kubenswrapper[4743]: I1123 00:08:35.722220 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:35 crc kubenswrapper[4743]: E1123 00:08:35.723803 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:36 crc kubenswrapper[4743]: I1123 00:08:36.721691 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:36 crc kubenswrapper[4743]: I1123 00:08:36.721812 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:36 crc kubenswrapper[4743]: I1123 00:08:36.721707 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:36 crc kubenswrapper[4743]: E1123 00:08:36.721962 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:36 crc kubenswrapper[4743]: E1123 00:08:36.722177 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:36 crc kubenswrapper[4743]: E1123 00:08:36.722362 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:37 crc kubenswrapper[4743]: I1123 00:08:37.722163 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:37 crc kubenswrapper[4743]: E1123 00:08:37.722420 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:37 crc kubenswrapper[4743]: I1123 00:08:37.723566 4743 scope.go:117] "RemoveContainer" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:08:37 crc kubenswrapper[4743]: E1123 00:08:37.723888 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" Nov 23 00:08:38 crc kubenswrapper[4743]: I1123 00:08:38.721945 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:38 crc kubenswrapper[4743]: I1123 00:08:38.723953 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:38 crc kubenswrapper[4743]: I1123 00:08:38.723860 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:38 crc kubenswrapper[4743]: E1123 00:08:38.724100 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:38 crc kubenswrapper[4743]: E1123 00:08:38.724305 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:38 crc kubenswrapper[4743]: E1123 00:08:38.724676 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:39 crc kubenswrapper[4743]: I1123 00:08:39.721716 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:39 crc kubenswrapper[4743]: E1123 00:08:39.721832 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:40 crc kubenswrapper[4743]: I1123 00:08:40.722376 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:40 crc kubenswrapper[4743]: I1123 00:08:40.722390 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:40 crc kubenswrapper[4743]: I1123 00:08:40.722766 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:40 crc kubenswrapper[4743]: E1123 00:08:40.722989 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:40 crc kubenswrapper[4743]: E1123 00:08:40.723102 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:40 crc kubenswrapper[4743]: E1123 00:08:40.723248 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:41 crc kubenswrapper[4743]: I1123 00:08:41.721752 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:41 crc kubenswrapper[4743]: E1123 00:08:41.722051 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:42 crc kubenswrapper[4743]: I1123 00:08:42.721591 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:42 crc kubenswrapper[4743]: I1123 00:08:42.721662 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:42 crc kubenswrapper[4743]: I1123 00:08:42.721610 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:42 crc kubenswrapper[4743]: E1123 00:08:42.721835 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:42 crc kubenswrapper[4743]: E1123 00:08:42.721971 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:42 crc kubenswrapper[4743]: E1123 00:08:42.722093 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:43 crc kubenswrapper[4743]: I1123 00:08:43.721749 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:43 crc kubenswrapper[4743]: E1123 00:08:43.721994 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:44 crc kubenswrapper[4743]: I1123 00:08:44.721932 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:44 crc kubenswrapper[4743]: E1123 00:08:44.722243 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:44 crc kubenswrapper[4743]: I1123 00:08:44.722732 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:44 crc kubenswrapper[4743]: I1123 00:08:44.722783 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:44 crc kubenswrapper[4743]: E1123 00:08:44.723029 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:44 crc kubenswrapper[4743]: E1123 00:08:44.723177 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:45 crc kubenswrapper[4743]: I1123 00:08:45.721867 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:45 crc kubenswrapper[4743]: E1123 00:08:45.722045 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:46 crc kubenswrapper[4743]: I1123 00:08:46.474798 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zvknx_b0418df6-be6b-459c-8685-770bc9c99a0e/kube-multus/1.log" Nov 23 00:08:46 crc kubenswrapper[4743]: I1123 00:08:46.475309 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zvknx_b0418df6-be6b-459c-8685-770bc9c99a0e/kube-multus/0.log" Nov 23 00:08:46 crc kubenswrapper[4743]: I1123 00:08:46.475378 4743 generic.go:334] "Generic (PLEG): container finished" podID="b0418df6-be6b-459c-8685-770bc9c99a0e" containerID="a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b" exitCode=1 Nov 23 00:08:46 crc kubenswrapper[4743]: I1123 00:08:46.475421 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zvknx" event={"ID":"b0418df6-be6b-459c-8685-770bc9c99a0e","Type":"ContainerDied","Data":"a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b"} Nov 23 00:08:46 crc kubenswrapper[4743]: I1123 00:08:46.475470 4743 scope.go:117] "RemoveContainer" containerID="c7d295c47942d070a4ee37f5d35900b7e361b65d24ec3e035e28d4b1f2f2d5ba" Nov 23 00:08:46 crc kubenswrapper[4743]: I1123 00:08:46.476417 4743 scope.go:117] "RemoveContainer" containerID="a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b" Nov 23 00:08:46 crc kubenswrapper[4743]: E1123 00:08:46.476809 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zvknx_openshift-multus(b0418df6-be6b-459c-8685-770bc9c99a0e)\"" pod="openshift-multus/multus-zvknx" podUID="b0418df6-be6b-459c-8685-770bc9c99a0e" Nov 23 00:08:46 crc kubenswrapper[4743]: I1123 00:08:46.499883 4743 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z4njs" podStartSLOduration=96.499846356 podStartE2EDuration="1m36.499846356s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:08:29.441285192 +0000 UTC m=+101.519383349" watchObservedRunningTime="2025-11-23 00:08:46.499846356 +0000 UTC m=+118.577944523" Nov 23 00:08:46 crc kubenswrapper[4743]: I1123 00:08:46.722115 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:46 crc kubenswrapper[4743]: I1123 00:08:46.722152 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:46 crc kubenswrapper[4743]: I1123 00:08:46.722128 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:46 crc kubenswrapper[4743]: E1123 00:08:46.722334 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:46 crc kubenswrapper[4743]: E1123 00:08:46.722586 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:46 crc kubenswrapper[4743]: E1123 00:08:46.722779 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:47 crc kubenswrapper[4743]: I1123 00:08:47.481793 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zvknx_b0418df6-be6b-459c-8685-770bc9c99a0e/kube-multus/1.log" Nov 23 00:08:47 crc kubenswrapper[4743]: I1123 00:08:47.721970 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:47 crc kubenswrapper[4743]: E1123 00:08:47.722175 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:48 crc kubenswrapper[4743]: I1123 00:08:48.722211 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:48 crc kubenswrapper[4743]: I1123 00:08:48.722216 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:48 crc kubenswrapper[4743]: I1123 00:08:48.722392 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:48 crc kubenswrapper[4743]: E1123 00:08:48.724062 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:48 crc kubenswrapper[4743]: E1123 00:08:48.724205 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:48 crc kubenswrapper[4743]: E1123 00:08:48.724336 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:48 crc kubenswrapper[4743]: E1123 00:08:48.738143 4743 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 23 00:08:48 crc kubenswrapper[4743]: E1123 00:08:48.841563 4743 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 00:08:49 crc kubenswrapper[4743]: I1123 00:08:49.721831 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:49 crc kubenswrapper[4743]: E1123 00:08:49.723477 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:49 crc kubenswrapper[4743]: I1123 00:08:49.724325 4743 scope.go:117] "RemoveContainer" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:08:49 crc kubenswrapper[4743]: E1123 00:08:49.724944 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v64gz_openshift-ovn-kubernetes(94c14c61-ccab-4ff7-abcd-91276e4ba6ab)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" Nov 23 00:08:50 crc kubenswrapper[4743]: I1123 00:08:50.721961 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:50 crc kubenswrapper[4743]: I1123 00:08:50.722037 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:50 crc kubenswrapper[4743]: I1123 00:08:50.722078 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:50 crc kubenswrapper[4743]: E1123 00:08:50.722149 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:50 crc kubenswrapper[4743]: E1123 00:08:50.722258 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:50 crc kubenswrapper[4743]: E1123 00:08:50.722407 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:51 crc kubenswrapper[4743]: I1123 00:08:51.721224 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:51 crc kubenswrapper[4743]: E1123 00:08:51.721588 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:52 crc kubenswrapper[4743]: I1123 00:08:52.722096 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:52 crc kubenswrapper[4743]: I1123 00:08:52.722193 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:52 crc kubenswrapper[4743]: E1123 00:08:52.722325 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:52 crc kubenswrapper[4743]: E1123 00:08:52.722449 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:52 crc kubenswrapper[4743]: I1123 00:08:52.722679 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:52 crc kubenswrapper[4743]: E1123 00:08:52.722932 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:53 crc kubenswrapper[4743]: I1123 00:08:53.734733 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:53 crc kubenswrapper[4743]: E1123 00:08:53.735053 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:53 crc kubenswrapper[4743]: E1123 00:08:53.843571 4743 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 00:08:54 crc kubenswrapper[4743]: I1123 00:08:54.721532 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:54 crc kubenswrapper[4743]: I1123 00:08:54.721629 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:54 crc kubenswrapper[4743]: I1123 00:08:54.721642 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:54 crc kubenswrapper[4743]: E1123 00:08:54.722140 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:54 crc kubenswrapper[4743]: E1123 00:08:54.722312 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:54 crc kubenswrapper[4743]: E1123 00:08:54.722612 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:55 crc kubenswrapper[4743]: I1123 00:08:55.721771 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:55 crc kubenswrapper[4743]: E1123 00:08:55.722089 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:56 crc kubenswrapper[4743]: I1123 00:08:56.721763 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:56 crc kubenswrapper[4743]: I1123 00:08:56.721900 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:56 crc kubenswrapper[4743]: E1123 00:08:56.721999 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:56 crc kubenswrapper[4743]: E1123 00:08:56.722136 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:56 crc kubenswrapper[4743]: I1123 00:08:56.722625 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:56 crc kubenswrapper[4743]: E1123 00:08:56.722783 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:57 crc kubenswrapper[4743]: I1123 00:08:57.721599 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:57 crc kubenswrapper[4743]: E1123 00:08:57.722008 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:08:58 crc kubenswrapper[4743]: I1123 00:08:58.721554 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:08:58 crc kubenswrapper[4743]: I1123 00:08:58.721554 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:08:58 crc kubenswrapper[4743]: E1123 00:08:58.723463 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:08:58 crc kubenswrapper[4743]: I1123 00:08:58.723565 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:08:58 crc kubenswrapper[4743]: E1123 00:08:58.723756 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:08:58 crc kubenswrapper[4743]: E1123 00:08:58.723916 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:08:58 crc kubenswrapper[4743]: E1123 00:08:58.844339 4743 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 00:08:59 crc kubenswrapper[4743]: I1123 00:08:59.721870 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:08:59 crc kubenswrapper[4743]: E1123 00:08:59.722118 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:09:00 crc kubenswrapper[4743]: I1123 00:09:00.721857 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:09:00 crc kubenswrapper[4743]: E1123 00:09:00.722103 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:09:00 crc kubenswrapper[4743]: I1123 00:09:00.723006 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:09:00 crc kubenswrapper[4743]: I1123 00:09:00.723005 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:09:00 crc kubenswrapper[4743]: E1123 00:09:00.723315 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:09:00 crc kubenswrapper[4743]: I1123 00:09:00.723572 4743 scope.go:117] "RemoveContainer" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:09:00 crc kubenswrapper[4743]: E1123 00:09:00.723676 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:09:00 crc kubenswrapper[4743]: I1123 00:09:00.724032 4743 scope.go:117] "RemoveContainer" containerID="a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b" Nov 23 00:09:01 crc kubenswrapper[4743]: I1123 00:09:01.539841 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/3.log" Nov 23 00:09:01 crc kubenswrapper[4743]: I1123 00:09:01.543874 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerStarted","Data":"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38"} Nov 23 00:09:01 crc kubenswrapper[4743]: I1123 00:09:01.544951 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:09:01 crc kubenswrapper[4743]: I1123 00:09:01.546720 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zvknx_b0418df6-be6b-459c-8685-770bc9c99a0e/kube-multus/1.log" Nov 23 00:09:01 crc kubenswrapper[4743]: I1123 00:09:01.546758 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zvknx" event={"ID":"b0418df6-be6b-459c-8685-770bc9c99a0e","Type":"ContainerStarted","Data":"bf998bc8e291a5c2248c56a257bd7070096af13d4ef62133ec4ae33e687b20dd"} Nov 23 00:09:01 crc kubenswrapper[4743]: I1123 00:09:01.596559 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podStartSLOduration=111.596540035 podStartE2EDuration="1m51.596540035s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:01.595923581 +0000 UTC m=+133.674021728" watchObservedRunningTime="2025-11-23 00:09:01.596540035 +0000 UTC m=+133.674638162" Nov 23 00:09:01 crc kubenswrapper[4743]: I1123 00:09:01.680040 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-t8ddf"] Nov 23 00:09:01 crc kubenswrapper[4743]: I1123 00:09:01.680209 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:09:01 crc kubenswrapper[4743]: E1123 00:09:01.680380 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:09:01 crc kubenswrapper[4743]: I1123 00:09:01.722043 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:09:01 crc kubenswrapper[4743]: E1123 00:09:01.722252 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:09:02 crc kubenswrapper[4743]: I1123 00:09:02.722208 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:09:02 crc kubenswrapper[4743]: I1123 00:09:02.722208 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:09:02 crc kubenswrapper[4743]: E1123 00:09:02.722467 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:09:02 crc kubenswrapper[4743]: E1123 00:09:02.722598 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:09:03 crc kubenswrapper[4743]: I1123 00:09:03.721283 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:09:03 crc kubenswrapper[4743]: I1123 00:09:03.721394 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:09:03 crc kubenswrapper[4743]: E1123 00:09:03.721568 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:09:03 crc kubenswrapper[4743]: E1123 00:09:03.721659 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:09:03 crc kubenswrapper[4743]: E1123 00:09:03.845855 4743 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 00:09:04 crc kubenswrapper[4743]: I1123 00:09:04.721274 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:09:04 crc kubenswrapper[4743]: I1123 00:09:04.721352 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:09:04 crc kubenswrapper[4743]: E1123 00:09:04.721525 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:09:04 crc kubenswrapper[4743]: E1123 00:09:04.721596 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:09:05 crc kubenswrapper[4743]: I1123 00:09:05.721654 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:09:05 crc kubenswrapper[4743]: I1123 00:09:05.721654 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:09:05 crc kubenswrapper[4743]: E1123 00:09:05.721863 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:09:05 crc kubenswrapper[4743]: E1123 00:09:05.721884 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:09:06 crc kubenswrapper[4743]: I1123 00:09:06.721904 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:09:06 crc kubenswrapper[4743]: I1123 00:09:06.722002 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:09:06 crc kubenswrapper[4743]: E1123 00:09:06.722160 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:09:06 crc kubenswrapper[4743]: E1123 00:09:06.722407 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:09:07 crc kubenswrapper[4743]: I1123 00:09:07.721237 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:09:07 crc kubenswrapper[4743]: I1123 00:09:07.721265 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:09:07 crc kubenswrapper[4743]: E1123 00:09:07.721444 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8ddf" podUID="24ea31d8-fd1d-4396-9b78-3058666d315a" Nov 23 00:09:07 crc kubenswrapper[4743]: E1123 00:09:07.721602 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 00:09:08 crc kubenswrapper[4743]: I1123 00:09:08.722097 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:09:08 crc kubenswrapper[4743]: E1123 00:09:08.723321 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 00:09:08 crc kubenswrapper[4743]: I1123 00:09:08.723383 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:09:08 crc kubenswrapper[4743]: E1123 00:09:08.723612 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.486579 4743 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.525864 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z7mnv"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.526419 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-njxkk"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.526719 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pqx6"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.526975 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.527249 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.527742 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.528193 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.528666 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.536108 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.536914 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.537252 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.537423 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.537653 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.537941 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.539393 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.539901 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.540291 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.540392 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.544870 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.544892 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.544902 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545212 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545237 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545268 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545298 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545369 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545390 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" 
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545423 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545462 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545477 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545559 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545565 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545394 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545704 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545218 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.545874 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.546115 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.546213 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.546283 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.546313 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.546427 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.546447 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.547769 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.547835 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.547848 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.548374 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgddj"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.548908 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.549370 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.550123 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.551577 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.552135 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.552755 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29397600-gj2zp"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.553034 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29397600-gj2zp"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.553381 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.553533 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.554185 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.555906 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.557126 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.557360 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.557535 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.557785 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.557938 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jjr84"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.558566 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rdgvc"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.558894 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.566003 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.566307 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jjr84"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.568333 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.575229 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.576164 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4xqck"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.576214 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.576214 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.595138 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.596173 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-dfr5p"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.596687 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dfr5p"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.597544 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.597580 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.598132 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xgh54"]
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.598627 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.598749 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.598902 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.599010 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.599015 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.599377 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.599412 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.599498 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.599619 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.599632 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.599966 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.600023 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.600098 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.600305 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.600537 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.600767 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.600908 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Nov 23 00:09:09 crc
kubenswrapper[4743]: I1123 00:09:09.600991 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.601100 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.601152 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.601243 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.601277 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.601363 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.601374 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.601587 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.601902 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.601977 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.602054 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.602116 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.602350 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.602735 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.604025 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.604289 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.606299 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mdb2v"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.606754 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-k4dzd"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.607033 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-pruner-29397600-gj2zp"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.607122 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.607384 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.607542 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.610856 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.612029 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.613609 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.617034 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.635281 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.636063 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t4jq5"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.636661 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.637430 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.637454 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-sqgm6"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.638123 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.638551 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.638789 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.639001 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.639240 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.643028 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.645885 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.667214 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.681961 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-image-import-ca\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682017 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42f92a20-3051-4cc0-861c-5b6a58753aaf-audit-policies\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682047 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-audit\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682069 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-etcd-serving-ca\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682090 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682118 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682144 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggdv\" (UniqueName: 
\"kubernetes.io/projected/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-kube-api-access-4ggdv\") pod \"image-pruner-29397600-gj2zp\" (UID: \"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae\") " pod="openshift-image-registry/image-pruner-29397600-gj2zp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682169 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-config\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682188 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682224 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/203f4e5b-490a-43cb-90db-8beed3234d54-etcd-client\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682242 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10760937-904d-4004-837d-66e5e3dfe95f-serving-cert\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682261 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682282 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682301 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-client-ca\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682335 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-config\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682370 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-config\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682397 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-machine-approver-tls\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682420 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a67a24-9440-4409-98d2-3ddbc8dda335-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tdbkj\" (UID: \"a1a67a24-9440-4409-98d2-3ddbc8dda335\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682456 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/203f4e5b-490a-43cb-90db-8beed3234d54-audit-dir\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682479 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf8h5\" (UniqueName: \"kubernetes.io/projected/e6309be1-c0c3-4a38-9770-85295aec41ae-kube-api-access-gf8h5\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682522 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-serving-cert\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682546 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-etcd-service-ca\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682570 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36d9643-d39a-480a-8caa-2a318102ef5b-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-v45wp\" (UID: \"d36d9643-d39a-480a-8caa-2a318102ef5b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682591 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42f92a20-3051-4cc0-861c-5b6a58753aaf-audit-dir\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682616 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thz5x\" (UniqueName: \"kubernetes.io/projected/a529fd56-b206-4ec0-984e-addbd17374ee-kube-api-access-thz5x\") pod \"downloads-7954f5f757-dfr5p\" (UID: \"a529fd56-b206-4ec0-984e-addbd17374ee\") " pod="openshift-console/downloads-7954f5f757-dfr5p" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682638 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39201fe-fa08-49ca-adec-15441d9cbaa5-serving-cert\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682664 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-auth-proxy-config\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682685 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a67a24-9440-4409-98d2-3ddbc8dda335-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tdbkj\" (UID: \"a1a67a24-9440-4409-98d2-3ddbc8dda335\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682708 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvnjf\" (UniqueName: \"kubernetes.io/projected/d39201fe-fa08-49ca-adec-15441d9cbaa5-kube-api-access-hvnjf\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682732 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzc6v\" (UniqueName: \"kubernetes.io/projected/100014ec-26b2-4311-82fd-41fa1228c011-kube-api-access-vzc6v\") pod \"cluster-samples-operator-665b6dd947-vl776\" (UID: \"100014ec-26b2-4311-82fd-41fa1228c011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682771 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdfgf\" (UniqueName: 
\"kubernetes.io/projected/a1a67a24-9440-4409-98d2-3ddbc8dda335-kube-api-access-rdfgf\") pod \"openshift-apiserver-operator-796bbdcf4f-tdbkj\" (UID: \"a1a67a24-9440-4409-98d2-3ddbc8dda335\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682795 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/42f92a20-3051-4cc0-861c-5b6a58753aaf-encryption-config\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682819 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-images\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682848 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10760937-904d-4004-837d-66e5e3dfe95f-config\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682872 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/203f4e5b-490a-43cb-90db-8beed3234d54-encryption-config\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682894 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682927 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfznb\" (UniqueName: \"kubernetes.io/projected/203f4e5b-490a-43cb-90db-8beed3234d54-kube-api-access-rfznb\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682948 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682971 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/42f92a20-3051-4cc0-861c-5b6a58753aaf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.682991 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-serviceca\") pod \"image-pruner-29397600-gj2zp\" (UID: \"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae\") " pod="openshift-image-registry/image-pruner-29397600-gj2zp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683013 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6309be1-c0c3-4a38-9770-85295aec41ae-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683038 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/100014ec-26b2-4311-82fd-41fa1228c011-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vl776\" (UID: \"100014ec-26b2-4311-82fd-41fa1228c011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683062 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lp2v\" (UniqueName: \"kubernetes.io/projected/6e63d320-241c-4f1e-ace2-6b28a8d9d338-kube-api-access-2lp2v\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683084 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683107 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf6mq\" (UniqueName: \"kubernetes.io/projected/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-kube-api-access-mf6mq\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683130 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/42f92a20-3051-4cc0-861c-5b6a58753aaf-etcd-client\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683155 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-etcd-client\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683180 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6309be1-c0c3-4a38-9770-85295aec41ae-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683205 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683229 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683252 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-client-ca\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683274 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-dir\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683297 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683321 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f92a20-3051-4cc0-861c-5b6a58753aaf-serving-cert\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683347 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-policies\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683371 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-config\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683398 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rqq5\" (UniqueName: \"kubernetes.io/projected/c9260cd3-3e10-47fe-b6f9-806bc90621fd-kube-api-access-4rqq5\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683423 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f88p5\" (UniqueName: \"kubernetes.io/projected/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-kube-api-access-f88p5\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683726 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683760 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/203f4e5b-490a-43cb-90db-8beed3234d54-serving-cert\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683833 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683866 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6309be1-c0c3-4a38-9770-85295aec41ae-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683909 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-etcd-ca\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683937 4743 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-config\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683962 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.683992 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhcnz\" (UniqueName: \"kubernetes.io/projected/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-kube-api-access-bhcnz\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684015 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5rqp\" (UniqueName: \"kubernetes.io/projected/42f92a20-3051-4cc0-861c-5b6a58753aaf-kube-api-access-j5rqp\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684074 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684109 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22rgd\" (UniqueName: \"kubernetes.io/projected/10760937-904d-4004-837d-66e5e3dfe95f-kube-api-access-22rgd\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684147 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e63d320-241c-4f1e-ace2-6b28a8d9d338-serving-cert\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684176 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/42f92a20-3051-4cc0-861c-5b6a58753aaf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684205 4743 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-config\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684239 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/203f4e5b-490a-43cb-90db-8beed3234d54-node-pullsecrets\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684268 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684290 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhxwk\" (UniqueName: \"kubernetes.io/projected/d36d9643-d39a-480a-8caa-2a318102ef5b-kube-api-access-hhxwk\") pod \"openshift-controller-manager-operator-756b6f6bc6-v45wp\" (UID: \"d36d9643-d39a-480a-8caa-2a318102ef5b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684319 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684360 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-config\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684389 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrwp\" (UniqueName: \"kubernetes.io/projected/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-kube-api-access-dxrwp\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684413 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d36d9643-d39a-480a-8caa-2a318102ef5b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v45wp\" (UID: \"d36d9643-d39a-480a-8caa-2a318102ef5b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" Nov 
23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.684442 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-serving-cert\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.685471 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.687122 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgddj"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.688540 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10760937-904d-4004-837d-66e5e3dfe95f-trusted-ca\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.688853 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z7mnv"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.689193 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.690265 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.690763 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.690828 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.690909 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.690980 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.698026 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.698857 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.698996 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.699130 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.699347 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.699648 
4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.699657 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.700008 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.700059 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.700249 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.700289 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.700010 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.700411 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.701148 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.701163 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.701304 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.705661 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.706236 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.706710 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.706912 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.708817 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rr7k6"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.709687 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.710261 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.710544 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.710645 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.711269 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.721993 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.722128 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.725336 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.726566 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.727395 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.727769 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-64f58"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.728759 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.729417 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.730386 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.730998 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.732544 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.733309 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.736884 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4xqck"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.738765 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.740375 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.745693 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-q85t8"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.746360 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.748518 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.750163 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.751340 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.752199 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.752546 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.756425 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.757590 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.757944 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x99kl"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.758513 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.759026 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pc54n"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.759740 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pc54n" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.763089 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.768902 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.769125 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.783445 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.789676 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.789766 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t4jq5"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795191 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795264 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22rgd\" (UniqueName: \"kubernetes.io/projected/10760937-904d-4004-837d-66e5e3dfe95f-kube-api-access-22rgd\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795298 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e63d320-241c-4f1e-ace2-6b28a8d9d338-serving-cert\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795333 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-service-ca\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795359 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/42f92a20-3051-4cc0-861c-5b6a58753aaf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795575 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-trusted-ca-bundle\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795620 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/203f4e5b-490a-43cb-90db-8beed3234d54-node-pullsecrets\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795645 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795679 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhxwk\" (UniqueName: \"kubernetes.io/projected/d36d9643-d39a-480a-8caa-2a318102ef5b-kube-api-access-hhxwk\") pod \"openshift-controller-manager-operator-756b6f6bc6-v45wp\" (UID: \"d36d9643-d39a-480a-8caa-2a318102ef5b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795704 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-config\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795740 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-config\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795764 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxrwp\" (UniqueName: \"kubernetes.io/projected/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-kube-api-access-dxrwp\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795789 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795813 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3672dfeb-7ed6-4281-bd84-7588c3df430a-config\") pod \"kube-apiserver-operator-766d6c64bb-qqlrm\" 
(UID: \"3672dfeb-7ed6-4281-bd84-7588c3df430a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795846 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10760937-904d-4004-837d-66e5e3dfe95f-trusted-ca\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795876 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d36d9643-d39a-480a-8caa-2a318102ef5b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v45wp\" (UID: \"d36d9643-d39a-480a-8caa-2a318102ef5b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795903 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-serving-cert\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.795995 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7de29f4-885a-469d-843e-3762c81f5379-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6nm8l\" (UID: \"d7de29f4-885a-469d-843e-3762c81f5379\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796028 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3672dfeb-7ed6-4281-bd84-7588c3df430a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qqlrm\" (UID: \"3672dfeb-7ed6-4281-bd84-7588c3df430a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796058 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-image-import-ca\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796079 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-audit\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796102 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-etcd-serving-ca\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc 
kubenswrapper[4743]: I1123 00:09:09.796130 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796152 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796179 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42f92a20-3051-4cc0-861c-5b6a58753aaf-audit-policies\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796247 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ggdv\" (UniqueName: \"kubernetes.io/projected/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-kube-api-access-4ggdv\") pod \"image-pruner-29397600-gj2zp\" (UID: \"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae\") " pod="openshift-image-registry/image-pruner-29397600-gj2zp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796391 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/203f4e5b-490a-43cb-90db-8beed3234d54-etcd-client\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796449 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10760937-904d-4004-837d-66e5e3dfe95f-serving-cert\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796525 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-config\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796561 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796641 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796693 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796715 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-client-ca\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796814 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-config\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.796897 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-config\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.797088 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-console-config\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.797134 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-machine-approver-tls\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.797219 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a67a24-9440-4409-98d2-3ddbc8dda335-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tdbkj\" (UID: \"a1a67a24-9440-4409-98d2-3ddbc8dda335\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.797247 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdpd8\" (UniqueName: \"kubernetes.io/projected/90e048af-50eb-4557-83e1-e19979685ded-kube-api-access-fdpd8\") pod \"package-server-manager-789f6589d5-fx5d9\" (UID: 
\"90e048af-50eb-4557-83e1-e19979685ded\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.797287 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187ddc60-f070-4386-a8f8-b2ae8fd2ed08-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fkpf9\" (UID: \"187ddc60-f070-4386-a8f8-b2ae8fd2ed08\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.797820 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-serving-cert\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.797842 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-etcd-service-ca\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.797961 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/203f4e5b-490a-43cb-90db-8beed3234d54-audit-dir\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798001 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf8h5\" (UniqueName: \"kubernetes.io/projected/e6309be1-c0c3-4a38-9770-85295aec41ae-kube-api-access-gf8h5\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798066 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fccac410-c6c3-454f-938c-64beeb04e317-console-oauth-config\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798086 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36d9643-d39a-480a-8caa-2a318102ef5b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v45wp\" (UID: \"d36d9643-d39a-480a-8caa-2a318102ef5b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798107 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fccac410-c6c3-454f-938c-64beeb04e317-console-serving-cert\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc 
kubenswrapper[4743]: I1123 00:09:09.798134 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3672dfeb-7ed6-4281-bd84-7588c3df430a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qqlrm\" (UID: \"3672dfeb-7ed6-4281-bd84-7588c3df430a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798160 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-auth-proxy-config\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798196 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a67a24-9440-4409-98d2-3ddbc8dda335-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tdbkj\" (UID: \"a1a67a24-9440-4409-98d2-3ddbc8dda335\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798241 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42f92a20-3051-4cc0-861c-5b6a58753aaf-audit-dir\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798295 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thz5x\" (UniqueName: \"kubernetes.io/projected/a529fd56-b206-4ec0-984e-addbd17374ee-kube-api-access-thz5x\") pod \"downloads-7954f5f757-dfr5p\" (UID: \"a529fd56-b206-4ec0-984e-addbd17374ee\") " pod="openshift-console/downloads-7954f5f757-dfr5p" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798315 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39201fe-fa08-49ca-adec-15441d9cbaa5-serving-cert\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798345 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvnjf\" (UniqueName: \"kubernetes.io/projected/d39201fe-fa08-49ca-adec-15441d9cbaa5-kube-api-access-hvnjf\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798400 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzc6v\" (UniqueName: \"kubernetes.io/projected/100014ec-26b2-4311-82fd-41fa1228c011-kube-api-access-vzc6v\") pod \"cluster-samples-operator-665b6dd947-vl776\" (UID: \"100014ec-26b2-4311-82fd-41fa1228c011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798432 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/187ddc60-f070-4386-a8f8-b2ae8fd2ed08-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fkpf9\" (UID: \"187ddc60-f070-4386-a8f8-b2ae8fd2ed08\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798465 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdfgf\" (UniqueName: \"kubernetes.io/projected/a1a67a24-9440-4409-98d2-3ddbc8dda335-kube-api-access-rdfgf\") pod \"openshift-apiserver-operator-796bbdcf4f-tdbkj\" (UID: \"a1a67a24-9440-4409-98d2-3ddbc8dda335\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798509 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/42f92a20-3051-4cc0-861c-5b6a58753aaf-encryption-config\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798850 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-config\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.799529 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/90e048af-50eb-4557-83e1-e19979685ded-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fx5d9\" (UID: \"90e048af-50eb-4557-83e1-e19979685ded\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.799870 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10760937-904d-4004-837d-66e5e3dfe95f-config\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.799910 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-images\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.799964 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-oauth-serving-cert\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.800376 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 
00:09:09.800429 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dfr5p"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.801101 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/42f92a20-3051-4cc0-861c-5b6a58753aaf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.798237 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-config\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.801798 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.802342 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10760937-904d-4004-837d-66e5e3dfe95f-trusted-ca\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.802458 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-etcd-serving-ca\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.802556 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e63d320-241c-4f1e-ace2-6b28a8d9d338-serving-cert\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.802626 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/203f4e5b-490a-43cb-90db-8beed3234d54-node-pullsecrets\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.804982 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d36d9643-d39a-480a-8caa-2a318102ef5b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v45wp\" (UID: \"d36d9643-d39a-480a-8caa-2a318102ef5b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.805169 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xgh54"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.805690 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-image-import-ca\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.806284 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/203f4e5b-490a-43cb-90db-8beed3234d54-encryption-config\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.806368 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfznb\" (UniqueName: \"kubernetes.io/projected/203f4e5b-490a-43cb-90db-8beed3234d54-kube-api-access-rfznb\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.806392 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.806415 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6309be1-c0c3-4a38-9770-85295aec41ae-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.806462 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.806495 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42f92a20-3051-4cc0-861c-5b6a58753aaf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.806617 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-serviceca\") pod \"image-pruner-29397600-gj2zp\" (UID: \"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae\") " pod="openshift-image-registry/image-pruner-29397600-gj2zp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.806657 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/100014ec-26b2-4311-82fd-41fa1228c011-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vl776\" (UID: \"100014ec-26b2-4311-82fd-41fa1228c011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.806706 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.806741 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf6mq\" (UniqueName: \"kubernetes.io/projected/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-kube-api-access-mf6mq\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.807010 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lp2v\" (UniqueName: \"kubernetes.io/projected/6e63d320-241c-4f1e-ace2-6b28a8d9d338-kube-api-access-2lp2v\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.807059 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-etcd-client\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.807091 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6309be1-c0c3-4a38-9770-85295aec41ae-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.808559 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-etcd-service-ca\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.808699 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/203f4e5b-490a-43cb-90db-8beed3234d54-audit-dir\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.809031 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.809280 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36d9643-d39a-480a-8caa-2a318102ef5b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v45wp\" (UID: \"d36d9643-d39a-480a-8caa-2a318102ef5b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.809740 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/203f4e5b-490a-43cb-90db-8beed3234d54-encryption-config\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.809859 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-etcd-client\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.810240 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-auth-proxy-config\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.810640 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42f92a20-3051-4cc0-861c-5b6a58753aaf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.811221 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.811260 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/42f92a20-3051-4cc0-861c-5b6a58753aaf-etcd-client\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.811269 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a67a24-9440-4409-98d2-3ddbc8dda335-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tdbkj\" (UID: \"a1a67a24-9440-4409-98d2-3ddbc8dda335\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.811286 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-dir\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.811316 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42f92a20-3051-4cc0-861c-5b6a58753aaf-audit-dir\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.811330 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.811365 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f92a20-3051-4cc0-861c-5b6a58753aaf-serving-cert\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.812641 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-dir\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.812782 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6309be1-c0c3-4a38-9770-85295aec41ae-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.813110 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.813864 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/42f92a20-3051-4cc0-861c-5b6a58753aaf-encryption-config\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.814186 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-images\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.814238 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.814240 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-config\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.815379 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-audit\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.815437 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-serving-cert\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.815641 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-config\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.815780 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.816179 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42f92a20-3051-4cc0-861c-5b6a58753aaf-audit-policies\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.816252 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-machine-approver-tls\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.816414 4743 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-config\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.816926 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.818200 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.819645 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.820105 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.820150 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.820674 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/203f4e5b-490a-43cb-90db-8beed3234d54-etcd-client\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.823169 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jjr84"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.824994 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 
00:09:09.829402 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.829610 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-client-ca\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.829725 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7de29f4-885a-469d-843e-3762c81f5379-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6nm8l\" (UID: \"d7de29f4-885a-469d-843e-3762c81f5379\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.829850 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39201fe-fa08-49ca-adec-15441d9cbaa5-serving-cert\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.829861 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-client-ca\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.829903 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-policies\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.829940 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdm7m\" (UniqueName: \"kubernetes.io/projected/fccac410-c6c3-454f-938c-64beeb04e317-kube-api-access-jdm7m\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.829963 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7de29f4-885a-469d-843e-3762c81f5379-config\") pod \"kube-controller-manager-operator-78b949d7b-6nm8l\" (UID: \"d7de29f4-885a-469d-843e-3762c81f5379\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.830039 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-config\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.830058 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rqq5\" (UniqueName: \"kubernetes.io/projected/c9260cd3-3e10-47fe-b6f9-806bc90621fd-kube-api-access-4rqq5\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.830076 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f88p5\" (UniqueName: \"kubernetes.io/projected/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-kube-api-access-f88p5\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.830097 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/187ddc60-f070-4386-a8f8-b2ae8fd2ed08-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fkpf9\" (UID: \"187ddc60-f070-4386-a8f8-b2ae8fd2ed08\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.830120 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-etcd-ca\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.830139 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-config\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.830339 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/203f4e5b-490a-43cb-90db-8beed3234d54-serving-cert\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.830435 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.832158 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-config\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 
00:09:09.830678 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.832200 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-serving-cert\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.832358 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6309be1-c0c3-4a38-9770-85295aec41ae-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.832447 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.830567 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-client-ca\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.831222 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-policies\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.831580 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/100014ec-26b2-4311-82fd-41fa1228c011-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vl776\" (UID: \"100014ec-26b2-4311-82fd-41fa1228c011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.831821 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.831827 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-etcd-ca\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.831932 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6309be1-c0c3-4a38-9770-85295aec41ae-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.832617 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f92a20-3051-4cc0-861c-5b6a58753aaf-serving-cert\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.832546 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhcnz\" (UniqueName: \"kubernetes.io/projected/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-kube-api-access-bhcnz\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.815032 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10760937-904d-4004-837d-66e5e3dfe95f-config\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.830850 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.831063 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-serviceca\") pod \"image-pruner-29397600-gj2zp\" (UID: \"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae\") " pod="openshift-image-registry/image-pruner-29397600-gj2zp" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.831222 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-config\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.832890 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-k4dzd"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.825632 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a67a24-9440-4409-98d2-3ddbc8dda335-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tdbkj\" (UID: \"a1a67a24-9440-4409-98d2-3ddbc8dda335\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.833274 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j5rqp\" (UniqueName: \"kubernetes.io/projected/42f92a20-3051-4cc0-861c-5b6a58753aaf-kube-api-access-j5rqp\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.833322 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/42f92a20-3051-4cc0-861c-5b6a58753aaf-etcd-client\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.833852 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10760937-904d-4004-837d-66e5e3dfe95f-serving-cert\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.834387 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/203f4e5b-490a-43cb-90db-8beed3234d54-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.834681 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.846051 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-64f58"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.846558 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.846729 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.846758 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/203f4e5b-490a-43cb-90db-8beed3234d54-serving-cert\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.848569 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.851341 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pqx6"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 
00:09:09.851644 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.853612 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.854989 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.856129 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rr7k6"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.857347 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mdb2v"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.858626 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.870701 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.887579 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-njxkk"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.889681 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.890109 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.890187 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rdgvc"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.891255 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-g77gl"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.892597 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.892710 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.894008 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.895069 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.897913 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-g77gl"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.898024 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-q85t8"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.901335 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.901360 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x99kl"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.903146 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-v7tpd"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.903883 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v7tpd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.904383 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.906762 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.908364 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.909355 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.909820 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.910987 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pc54n"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.912193 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l6msj"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.913724 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l6msj"] Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.913835 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.929821 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.934668 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-service-ca\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.934711 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-trusted-ca-bundle\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.934758 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3672dfeb-7ed6-4281-bd84-7588c3df430a-config\") pod \"kube-apiserver-operator-766d6c64bb-qqlrm\" (UID: \"3672dfeb-7ed6-4281-bd84-7588c3df430a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.934781 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7de29f4-885a-469d-843e-3762c81f5379-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6nm8l\" (UID: \"d7de29f4-885a-469d-843e-3762c81f5379\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.934807 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3672dfeb-7ed6-4281-bd84-7588c3df430a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qqlrm\" (UID: \"3672dfeb-7ed6-4281-bd84-7588c3df430a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.934878 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-console-config\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.934904 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdpd8\" (UniqueName: \"kubernetes.io/projected/90e048af-50eb-4557-83e1-e19979685ded-kube-api-access-fdpd8\") pod \"package-server-manager-789f6589d5-fx5d9\" (UID: \"90e048af-50eb-4557-83e1-e19979685ded\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.934932 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187ddc60-f070-4386-a8f8-b2ae8fd2ed08-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fkpf9\" (UID: \"187ddc60-f070-4386-a8f8-b2ae8fd2ed08\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.934967 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fccac410-c6c3-454f-938c-64beeb04e317-console-oauth-config\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.935004 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fccac410-c6c3-454f-938c-64beeb04e317-console-serving-cert\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.935023 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3672dfeb-7ed6-4281-bd84-7588c3df430a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qqlrm\" (UID: \"3672dfeb-7ed6-4281-bd84-7588c3df430a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.935069 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/187ddc60-f070-4386-a8f8-b2ae8fd2ed08-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fkpf9\" (UID: \"187ddc60-f070-4386-a8f8-b2ae8fd2ed08\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.935104 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/90e048af-50eb-4557-83e1-e19979685ded-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fx5d9\" (UID: \"90e048af-50eb-4557-83e1-e19979685ded\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.935144 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-oauth-serving-cert\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.935220 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7de29f4-885a-469d-843e-3762c81f5379-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6nm8l\" (UID: \"d7de29f4-885a-469d-843e-3762c81f5379\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.935257 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdm7m\" (UniqueName: \"kubernetes.io/projected/fccac410-c6c3-454f-938c-64beeb04e317-kube-api-access-jdm7m\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: 
I1123 00:09:09.935295 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7de29f4-885a-469d-843e-3762c81f5379-config\") pod \"kube-controller-manager-operator-78b949d7b-6nm8l\" (UID: \"d7de29f4-885a-469d-843e-3762c81f5379\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.935365 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/187ddc60-f070-4386-a8f8-b2ae8fd2ed08-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fkpf9\" (UID: \"187ddc60-f070-4386-a8f8-b2ae8fd2ed08\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.936028 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-service-ca\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.936236 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-trusted-ca-bundle\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.937117 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-oauth-serving-cert\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.937447 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fccac410-c6c3-454f-938c-64beeb04e317-console-config\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.938978 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fccac410-c6c3-454f-938c-64beeb04e317-console-serving-cert\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.940076 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fccac410-c6c3-454f-938c-64beeb04e317-console-oauth-config\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.949916 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.969741 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 23 00:09:09 crc kubenswrapper[4743]: 
I1123 00:09:09.990124 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 23 00:09:09 crc kubenswrapper[4743]: I1123 00:09:09.995893 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3672dfeb-7ed6-4281-bd84-7588c3df430a-config\") pod \"kube-apiserver-operator-766d6c64bb-qqlrm\" (UID: \"3672dfeb-7ed6-4281-bd84-7588c3df430a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.009433 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.029725 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.049348 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.070234 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.089367 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.108846 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.129274 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.149189 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.169425 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.189293 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.199619 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/187ddc60-f070-4386-a8f8-b2ae8fd2ed08-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fkpf9\" (UID: \"187ddc60-f070-4386-a8f8-b2ae8fd2ed08\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.209747 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.216237 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187ddc60-f070-4386-a8f8-b2ae8fd2ed08-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fkpf9\" (UID: \"187ddc60-f070-4386-a8f8-b2ae8fd2ed08\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.229441 4743 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.250027 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.269012 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.288997 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.302268 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3672dfeb-7ed6-4281-bd84-7588c3df430a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qqlrm\" (UID: \"3672dfeb-7ed6-4281-bd84-7588c3df430a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.310008 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.329334 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.349154 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.369122 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.389300 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.400728 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7de29f4-885a-469d-843e-3762c81f5379-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6nm8l\" (UID: \"d7de29f4-885a-469d-843e-3762c81f5379\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.409601 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.417135 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7de29f4-885a-469d-843e-3762c81f5379-config\") pod \"kube-controller-manager-operator-78b949d7b-6nm8l\" (UID: \"d7de29f4-885a-469d-843e-3762c81f5379\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.429780 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.450119 4743 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.472941 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.498186 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.510595 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.529722 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.550399 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.570025 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.589198 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.609323 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.619913 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/90e048af-50eb-4557-83e1-e19979685ded-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fx5d9\" (UID: \"90e048af-50eb-4557-83e1-e19979685ded\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.630039 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.669131 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.689566 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.709124 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.721935 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.722532 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.727195 4743 request.go:700] Waited for 1.015682049s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&limit=500&resourceVersion=0 Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.729082 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.750073 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.770286 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.796316 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.809805 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.830358 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.849187 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.868815 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.888572 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.909273 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.949743 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.969670 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 23 00:09:10 crc kubenswrapper[4743]: I1123 00:09:10.990520 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.010428 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.029244 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.049231 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.069574 4743 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.089592 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.109654 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.130391 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.149737 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.170172 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.189788 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.210205 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.229445 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.294592 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.298127 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.298413 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.310386 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.329001 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.351027 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.369163 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.389569 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.410045 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.429005 4743 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.449540 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.469448 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.490130 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.509634 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.529869 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.568553 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhxwk\" (UniqueName: \"kubernetes.io/projected/d36d9643-d39a-480a-8caa-2a318102ef5b-kube-api-access-hhxwk\") pod \"openshift-controller-manager-operator-756b6f6bc6-v45wp\" (UID: \"d36d9643-d39a-480a-8caa-2a318102ef5b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.587446 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ggdv\" (UniqueName: \"kubernetes.io/projected/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-kube-api-access-4ggdv\") pod \"image-pruner-29397600-gj2zp\" (UID: \"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae\") " pod="openshift-image-registry/image-pruner-29397600-gj2zp" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.608331 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxrwp\" (UniqueName: \"kubernetes.io/projected/bfb3cd0c-631e-4904-ad6c-bd2393d94c46-kube-api-access-dxrwp\") pod \"machine-approver-56656f9798-26wwd\" (UID: \"bfb3cd0c-631e-4904-ad6c-bd2393d94c46\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.624267 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvnjf\" (UniqueName: \"kubernetes.io/projected/d39201fe-fa08-49ca-adec-15441d9cbaa5-kube-api-access-hvnjf\") pod \"controller-manager-879f6c89f-7pqx6\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.645774 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzc6v\" (UniqueName: \"kubernetes.io/projected/100014ec-26b2-4311-82fd-41fa1228c011-kube-api-access-vzc6v\") pod \"cluster-samples-operator-665b6dd947-vl776\" (UID: \"100014ec-26b2-4311-82fd-41fa1228c011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.664096 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdfgf\" (UniqueName: \"kubernetes.io/projected/a1a67a24-9440-4409-98d2-3ddbc8dda335-kube-api-access-rdfgf\") pod \"openshift-apiserver-operator-796bbdcf4f-tdbkj\" (UID: \"a1a67a24-9440-4409-98d2-3ddbc8dda335\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.671408 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.682030 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29397600-gj2zp" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.685685 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfznb\" (UniqueName: \"kubernetes.io/projected/203f4e5b-490a-43cb-90db-8beed3234d54-kube-api-access-rfznb\") pod \"apiserver-76f77b778f-z7mnv\" (UID: \"203f4e5b-490a-43cb-90db-8beed3234d54\") " pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.694789 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.712914 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lp2v\" (UniqueName: \"kubernetes.io/projected/6e63d320-241c-4f1e-ace2-6b28a8d9d338-kube-api-access-2lp2v\") pod \"route-controller-manager-6576b87f9c-52h52\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.724947 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.725430 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf6mq\" (UniqueName: \"kubernetes.io/projected/6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022-kube-api-access-mf6mq\") pod \"machine-api-operator-5694c8668f-njxkk\" (UID: \"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.728194 4743 request.go:700] Waited for 1.914967019s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.735307 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.747998 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf8h5\" (UniqueName: \"kubernetes.io/projected/e6309be1-c0c3-4a38-9770-85295aec41ae-kube-api-access-gf8h5\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.768300 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thz5x\" (UniqueName: \"kubernetes.io/projected/a529fd56-b206-4ec0-984e-addbd17374ee-kube-api-access-thz5x\") pod \"downloads-7954f5f757-dfr5p\" (UID: \"a529fd56-b206-4ec0-984e-addbd17374ee\") " pod="openshift-console/downloads-7954f5f757-dfr5p" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.783657 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.787635 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22rgd\" (UniqueName: \"kubernetes.io/projected/10760937-904d-4004-837d-66e5e3dfe95f-kube-api-access-22rgd\") pod \"console-operator-58897d9998-jjr84\" (UID: \"10760937-904d-4004-837d-66e5e3dfe95f\") " pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.809757 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rqq5\" (UniqueName: \"kubernetes.io/projected/c9260cd3-3e10-47fe-b6f9-806bc90621fd-kube-api-access-4rqq5\") pod \"oauth-openshift-558db77b4-zgddj\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.824173 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f88p5\" (UniqueName: \"kubernetes.io/projected/8ab03a1e-bf7f-4ad0-89da-d129b78994e0-kube-api-access-f88p5\") pod \"authentication-operator-69f744f599-rdgvc\" (UID: \"8ab03a1e-bf7f-4ad0-89da-d129b78994e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.833381 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.850715 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6309be1-c0c3-4a38-9770-85295aec41ae-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7fd46\" (UID: \"e6309be1-c0c3-4a38-9770-85295aec41ae\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.851924 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.870474 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhcnz\" (UniqueName: \"kubernetes.io/projected/bf45d2c0-250b-4c0f-8fe4-4eee3618a17e-kube-api-access-bhcnz\") pod \"etcd-operator-b45778765-4xqck\" (UID: \"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.888855 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.889349 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.909793 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.929070 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.951107 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.969043 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.989908 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 23 00:09:11 crc kubenswrapper[4743]: I1123 00:09:11.999465 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.000249 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.007980 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.009671 4743 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.014703 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.029553 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.031392 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dfr5p" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.040502 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.048703 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.072447 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5rqp\" (UniqueName: \"kubernetes.io/projected/42f92a20-3051-4cc0-861c-5b6a58753aaf-kube-api-access-j5rqp\") pod \"apiserver-7bbb656c7d-gdmf8\" (UID: \"42f92a20-3051-4cc0-861c-5b6a58753aaf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.086350 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3672dfeb-7ed6-4281-bd84-7588c3df430a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qqlrm\" (UID: \"3672dfeb-7ed6-4281-bd84-7588c3df430a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.092096 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.104720 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/187ddc60-f070-4386-a8f8-b2ae8fd2ed08-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fkpf9\" (UID: \"187ddc60-f070-4386-a8f8-b2ae8fd2ed08\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.129303 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdpd8\" (UniqueName: \"kubernetes.io/projected/90e048af-50eb-4557-83e1-e19979685ded-kube-api-access-fdpd8\") pod \"package-server-manager-789f6589d5-fx5d9\" (UID: \"90e048af-50eb-4557-83e1-e19979685ded\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.134456 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.140717 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.150924 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7de29f4-885a-469d-843e-3762c81f5379-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6nm8l\" (UID: \"d7de29f4-885a-469d-843e-3762c81f5379\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.164579 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdm7m\" (UniqueName: \"kubernetes.io/projected/fccac410-c6c3-454f-938c-64beeb04e317-kube-api-access-jdm7m\") pod \"console-f9d7485db-k4dzd\" (UID: \"fccac410-c6c3-454f-938c-64beeb04e317\") " pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.170446 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.178269 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pqx6"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.178714 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29397600-gj2zp"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.189257 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.207592 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.211822 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87578262-f89f-4b5c-92ab-a94000397e31-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.211883 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/261cf90a-79d7-4b54-9019-7a25dc991ec7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.212069 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blqmt\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-kube-api-access-blqmt\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.212225 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55e0a41-c894-4560-ae81-513ecb867548-serving-cert\") pod \"openshift-config-operator-7777fb866f-xgh54\" (UID: 
\"b55e0a41-c894-4560-ae81-513ecb867548\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.212272 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-registry-tls\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.212323 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e59f68b6-cb09-4c13-acc5-eb4b713711da-metrics-certs\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.212384 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e59f68b6-cb09-4c13-acc5-eb4b713711da-stats-auth\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.220558 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.224039 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e59f68b6-cb09-4c13-acc5-eb4b713711da-default-certificate\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.224291 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/261cf90a-79d7-4b54-9019-7a25dc991ec7-metrics-tls\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.224382 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87578262-f89f-4b5c-92ab-a94000397e31-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.224433 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpqsr\" (UniqueName: \"kubernetes.io/projected/ff4a033a-c60f-4a28-8980-9bcbbdd88ba7-kube-api-access-zpqsr\") pod \"migrator-59844c95c7-kdh94\" (UID: \"ff4a033a-c60f-4a28-8980-9bcbbdd88ba7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.224587 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-bound-sa-token\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.224671 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/261cf90a-79d7-4b54-9019-7a25dc991ec7-trusted-ca\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.224829 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52pkt\" (UniqueName: \"kubernetes.io/projected/e59f68b6-cb09-4c13-acc5-eb4b713711da-kube-api-access-52pkt\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.225713 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9xmv\" (UniqueName: \"kubernetes.io/projected/3b9b1f5e-0438-465c-8c1d-a68f26ed23db-kube-api-access-c9xmv\") pod \"dns-operator-744455d44c-mdb2v\" (UID: \"3b9b1f5e-0438-465c-8c1d-a68f26ed23db\") " pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.227349 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhkdn\" (UniqueName: \"kubernetes.io/projected/261cf90a-79d7-4b54-9019-7a25dc991ec7-kube-api-access-lhkdn\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.228031 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-registry-certificates\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.228086 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b9b1f5e-0438-465c-8c1d-a68f26ed23db-metrics-tls\") pod \"dns-operator-744455d44c-mdb2v\" (UID: \"3b9b1f5e-0438-465c-8c1d-a68f26ed23db\") " pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.228220 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.228260 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e59f68b6-cb09-4c13-acc5-eb4b713711da-service-ca-bundle\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.228301 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-trusted-ca\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.228389 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b55e0a41-c894-4560-ae81-513ecb867548-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xgh54\" (UID: \"b55e0a41-c894-4560-ae81-513ecb867548\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.228460 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b528j\" (UniqueName: \"kubernetes.io/projected/b55e0a41-c894-4560-ae81-513ecb867548-kube-api-access-b528j\") pod \"openshift-config-operator-7777fb866f-xgh54\" (UID: \"b55e0a41-c894-4560-ae81-513ecb867548\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:12 crc kubenswrapper[4743]: E1123 00:09:12.230049 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:12.730017556 +0000 UTC m=+144.808115683 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.231339 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776"] Nov 23 00:09:12 crc kubenswrapper[4743]: W1123 00:09:12.235731 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd39201fe_fa08_49ca_adec_15441d9cbaa5.slice/crio-54a5001214ecda3e593468fdbb340b605ef7cc442d4235039757517db3e0fea6 WatchSource:0}: Error finding container 54a5001214ecda3e593468fdbb340b605ef7cc442d4235039757517db3e0fea6: Status 404 returned error can't find the container with id 54a5001214ecda3e593468fdbb340b605ef7cc442d4235039757517db3e0fea6 Nov 23 00:09:12 crc kubenswrapper[4743]: W1123 00:09:12.271149 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd36d9643_d39a_480a_8caa_2a318102ef5b.slice/crio-7f9d804fc37f55786453c93d24a89b4844e31350625cb578a564217a761a6d98 WatchSource:0}: Error finding container 7f9d804fc37f55786453c93d24a89b4844e31350625cb578a564217a761a6d98: Status 404 returned error can't find the container with id 7f9d804fc37f55786453c93d24a89b4844e31350625cb578a564217a761a6d98 Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331014 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331203 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/963d8537-d384-4feb-a776-da74096c0884-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rr7k6\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:12 crc kubenswrapper[4743]: E1123 00:09:12.331232 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:12.831198236 +0000 UTC m=+144.909296483 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331362 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d2d13e04-dd95-4e42-97da-6a9ff04fd687-signing-cabundle\") pod \"service-ca-9c57cc56f-x99kl\" (UID: \"d2d13e04-dd95-4e42-97da-6a9ff04fd687\") " pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331419 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b55e0a41-c894-4560-ae81-513ecb867548-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xgh54\" (UID: \"b55e0a41-c894-4560-ae81-513ecb867548\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331505 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b528j\" (UniqueName: \"kubernetes.io/projected/b55e0a41-c894-4560-ae81-513ecb867548-kube-api-access-b528j\") pod \"openshift-config-operator-7777fb866f-xgh54\" (UID: \"b55e0a41-c894-4560-ae81-513ecb867548\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331578 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87578262-f89f-4b5c-92ab-a94000397e31-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331621 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/261cf90a-79d7-4b54-9019-7a25dc991ec7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331670 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2d13e04-dd95-4e42-97da-6a9ff04fd687-signing-key\") pod \"service-ca-9c57cc56f-x99kl\" (UID: \"d2d13e04-dd95-4e42-97da-6a9ff04fd687\") " pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331699 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-plugins-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331788 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7c5f1f39-34be-4d98-af43-81b98c87f775-srv-cert\") pod \"catalog-operator-68c6474976-4hg6z\" (UID: \"7c5f1f39-34be-4d98-af43-81b98c87f775\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331818 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-socket-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331854 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blqmt\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-kube-api-access-blqmt\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331880 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55e0a41-c894-4560-ae81-513ecb867548-serving-cert\") pod \"openshift-config-operator-7777fb866f-xgh54\" (UID: \"b55e0a41-c894-4560-ae81-513ecb867548\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331938 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-registry-tls\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331985 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac9135b1-ff1e-460b-8c71-84b1f15317fa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hw6zj\" (UID: \"ac9135b1-ff1e-460b-8c71-84b1f15317fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.331985 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b55e0a41-c894-4560-ae81-513ecb867548-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xgh54\" (UID: \"b55e0a41-c894-4560-ae81-513ecb867548\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.332036 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtmgj\" (UniqueName: \"kubernetes.io/projected/ab63bc02-007b-4a61-9355-7475eb5f4db2-kube-api-access-rtmgj\") pod \"collect-profiles-29397600-qg2pc\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.332103 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/0058ec17-3783-415d-8d1a-dee576ffa3a3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5mslk\" (UID: \"0058ec17-3783-415d-8d1a-dee576ffa3a3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.332394 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87578262-f89f-4b5c-92ab-a94000397e31-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.332407 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/693544fb-054a-4b31-92a4-c1f89a7ee729-node-bootstrap-token\") pod \"machine-config-server-v7tpd\" (UID: \"693544fb-054a-4b31-92a4-c1f89a7ee729\") " pod="openshift-machine-config-operator/machine-config-server-v7tpd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.332459 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e59f68b6-cb09-4c13-acc5-eb4b713711da-metrics-certs\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.332524 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0058ec17-3783-415d-8d1a-dee576ffa3a3-srv-cert\") pod \"olm-operator-6b444d44fb-5mslk\" (UID: \"0058ec17-3783-415d-8d1a-dee576ffa3a3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.333148 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e59f68b6-cb09-4c13-acc5-eb4b713711da-stats-auth\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.333336 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be3fa480-324b-4a69-a052-2a196e8daad3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9jcb9\" (UID: \"be3fa480-324b-4a69-a052-2a196e8daad3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.333509 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e59f68b6-cb09-4c13-acc5-eb4b713711da-default-certificate\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.334173 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2dcx\" (UniqueName: \"kubernetes.io/projected/d2d13e04-dd95-4e42-97da-6a9ff04fd687-kube-api-access-h2dcx\") pod \"service-ca-9c57cc56f-x99kl\" (UID: 
\"d2d13e04-dd95-4e42-97da-6a9ff04fd687\") " pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.334212 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/693544fb-054a-4b31-92a4-c1f89a7ee729-certs\") pod \"machine-config-server-v7tpd\" (UID: \"693544fb-054a-4b31-92a4-c1f89a7ee729\") " pod="openshift-machine-config-operator/machine-config-server-v7tpd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.334250 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-registration-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.334285 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec467b25-b224-4334-a46b-9f599c60138f-proxy-tls\") pod \"machine-config-controller-84d6567774-2pjp2\" (UID: \"ec467b25-b224-4334-a46b-9f599c60138f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.334313 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c143775-a872-437d-874a-1f30df4361b4-serving-cert\") pod \"service-ca-operator-777779d784-q85t8\" (UID: \"5c143775-a872-437d-874a-1f30df4361b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.334365 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h756p\" (UniqueName: \"kubernetes.io/projected/7c5f1f39-34be-4d98-af43-81b98c87f775-kube-api-access-h756p\") pod \"catalog-operator-68c6474976-4hg6z\" (UID: \"7c5f1f39-34be-4d98-af43-81b98c87f775\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.334388 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af4098ab-953c-4329-bec5-ec5a44f01f8e-images\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.334415 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7c5f1f39-34be-4d98-af43-81b98c87f775-profile-collector-cert\") pod \"catalog-operator-68c6474976-4hg6z\" (UID: \"7c5f1f39-34be-4d98-af43-81b98c87f775\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.336854 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-registry-tls\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337058 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55e0a41-c894-4560-ae81-513ecb867548-serving-cert\") pod \"openshift-config-operator-7777fb866f-xgh54\" (UID: \"b55e0a41-c894-4560-ae81-513ecb867548\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337405 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e59f68b6-cb09-4c13-acc5-eb4b713711da-default-certificate\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.334468 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkxm\" (UniqueName: \"kubernetes.io/projected/963d8537-d384-4feb-a776-da74096c0884-kube-api-access-vmkxm\") pod \"marketplace-operator-79b997595-rr7k6\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337659 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/261cf90a-79d7-4b54-9019-7a25dc991ec7-metrics-tls\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337684 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wrbk\" (UniqueName: \"kubernetes.io/projected/be3fa480-324b-4a69-a052-2a196e8daad3-kube-api-access-4wrbk\") pod \"kube-storage-version-migrator-operator-b67b599dd-9jcb9\" (UID: \"be3fa480-324b-4a69-a052-2a196e8daad3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337714 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-csi-data-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337733 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d70cc2b9-d502-42fb-9f99-9473b57b2293-tmpfs\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337778 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87578262-f89f-4b5c-92ab-a94000397e31-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 
00:09:12.337798 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpqsr\" (UniqueName: \"kubernetes.io/projected/ff4a033a-c60f-4a28-8980-9bcbbdd88ba7-kube-api-access-zpqsr\") pod \"migrator-59844c95c7-kdh94\" (UID: \"ff4a033a-c60f-4a28-8980-9bcbbdd88ba7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337817 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3fa480-324b-4a69-a052-2a196e8daad3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9jcb9\" (UID: \"be3fa480-324b-4a69-a052-2a196e8daad3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337880 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-bound-sa-token\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337908 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/261cf90a-79d7-4b54-9019-7a25dc991ec7-trusted-ca\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337951 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsz2g\" (UniqueName: \"kubernetes.io/projected/ec467b25-b224-4334-a46b-9f599c60138f-kube-api-access-zsz2g\") pod \"machine-config-controller-84d6567774-2pjp2\" (UID: \"ec467b25-b224-4334-a46b-9f599c60138f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.337993 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dwr\" (UniqueName: \"kubernetes.io/projected/3e4abcae-ad51-404d-b5e9-e0d87c08b639-kube-api-access-w9dwr\") pod \"multus-admission-controller-857f4d67dd-64f58\" (UID: \"3e4abcae-ad51-404d-b5e9-e0d87c08b639\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.338012 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/963d8537-d384-4feb-a776-da74096c0884-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rr7k6\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.338036 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52pkt\" (UniqueName: \"kubernetes.io/projected/e59f68b6-cb09-4c13-acc5-eb4b713711da-kube-api-access-52pkt\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.338067 4743 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9xmv\" (UniqueName: \"kubernetes.io/projected/3b9b1f5e-0438-465c-8c1d-a68f26ed23db-kube-api-access-c9xmv\") pod \"dns-operator-744455d44c-mdb2v\" (UID: \"3b9b1f5e-0438-465c-8c1d-a68f26ed23db\") " pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.338090 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab63bc02-007b-4a61-9355-7475eb5f4db2-config-volume\") pod \"collect-profiles-29397600-qg2pc\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.338110 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f2sg\" (UniqueName: \"kubernetes.io/projected/ac9135b1-ff1e-460b-8c71-84b1f15317fa-kube-api-access-2f2sg\") pod \"control-plane-machine-set-operator-78cbb6b69f-hw6zj\" (UID: \"ac9135b1-ff1e-460b-8c71-84b1f15317fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.338887 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e59f68b6-cb09-4c13-acc5-eb4b713711da-stats-auth\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.339647 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e59f68b6-cb09-4c13-acc5-eb4b713711da-metrics-certs\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.340102 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhkdn\" (UniqueName: \"kubernetes.io/projected/261cf90a-79d7-4b54-9019-7a25dc991ec7-kube-api-access-lhkdn\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.340137 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d70cc2b9-d502-42fb-9f99-9473b57b2293-apiservice-cert\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.340157 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e4abcae-ad51-404d-b5e9-e0d87c08b639-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-64f58\" (UID: \"3e4abcae-ad51-404d-b5e9-e0d87c08b639\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.341688 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/bea571d4-5d83-4f27-93c0-38ca408f0841-config-volume\") pod \"dns-default-g77gl\" (UID: \"bea571d4-5d83-4f27-93c0-38ca408f0841\") " pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.341770 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7fpm\" (UniqueName: \"kubernetes.io/projected/76c94f30-89a4-408d-8168-a49eb3869a39-kube-api-access-z7fpm\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.341794 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b823bb3-b726-4881-9529-b40a3847704b-cert\") pod \"ingress-canary-pc54n\" (UID: \"0b823bb3-b726-4881-9529-b40a3847704b\") " pod="openshift-ingress-canary/ingress-canary-pc54n" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.341863 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af4098ab-953c-4329-bec5-ec5a44f01f8e-proxy-tls\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.341951 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-registry-certificates\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.341968 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b9b1f5e-0438-465c-8c1d-a68f26ed23db-metrics-tls\") pod \"dns-operator-744455d44c-mdb2v\" (UID: \"3b9b1f5e-0438-465c-8c1d-a68f26ed23db\") " pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.342574 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djvq9\" (UniqueName: \"kubernetes.io/projected/0b823bb3-b726-4881-9529-b40a3847704b-kube-api-access-djvq9\") pod \"ingress-canary-pc54n\" (UID: \"0b823bb3-b726-4881-9529-b40a3847704b\") " pod="openshift-ingress-canary/ingress-canary-pc54n" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.342852 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbz8t\" (UniqueName: \"kubernetes.io/projected/af4098ab-953c-4329-bec5-ec5a44f01f8e-kube-api-access-lbz8t\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.342928 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcp2x\" (UniqueName: \"kubernetes.io/projected/693544fb-054a-4b31-92a4-c1f89a7ee729-kube-api-access-jcp2x\") pod \"machine-config-server-v7tpd\" (UID: \"693544fb-054a-4b31-92a4-c1f89a7ee729\") " 
pod="openshift-machine-config-operator/machine-config-server-v7tpd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.342985 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d70cc2b9-d502-42fb-9f99-9473b57b2293-webhook-cert\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343014 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec467b25-b224-4334-a46b-9f599c60138f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2pjp2\" (UID: \"ec467b25-b224-4334-a46b-9f599c60138f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343042 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343065 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x576\" (UniqueName: \"kubernetes.io/projected/d70cc2b9-d502-42fb-9f99-9473b57b2293-kube-api-access-8x576\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343087 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bea571d4-5d83-4f27-93c0-38ca408f0841-metrics-tls\") pod \"dns-default-g77gl\" (UID: \"bea571d4-5d83-4f27-93c0-38ca408f0841\") " pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343114 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-mountpoint-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343137 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdnn4\" (UniqueName: \"kubernetes.io/projected/0058ec17-3783-415d-8d1a-dee576ffa3a3-kube-api-access-gdnn4\") pod \"olm-operator-6b444d44fb-5mslk\" (UID: \"0058ec17-3783-415d-8d1a-dee576ffa3a3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343162 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5p5g\" (UniqueName: \"kubernetes.io/projected/bea571d4-5d83-4f27-93c0-38ca408f0841-kube-api-access-t5p5g\") pod \"dns-default-g77gl\" (UID: \"bea571d4-5d83-4f27-93c0-38ca408f0841\") " pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:12 crc 
kubenswrapper[4743]: I1123 00:09:12.343186 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c143775-a872-437d-874a-1f30df4361b4-config\") pod \"service-ca-operator-777779d784-q85t8\" (UID: \"5c143775-a872-437d-874a-1f30df4361b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343275 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47zwl\" (UniqueName: \"kubernetes.io/projected/5c143775-a872-437d-874a-1f30df4361b4-kube-api-access-47zwl\") pod \"service-ca-operator-777779d784-q85t8\" (UID: \"5c143775-a872-437d-874a-1f30df4361b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343321 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-trusted-ca\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343344 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e59f68b6-cb09-4c13-acc5-eb4b713711da-service-ca-bundle\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343366 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab63bc02-007b-4a61-9355-7475eb5f4db2-secret-volume\") pod \"collect-profiles-29397600-qg2pc\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:12 crc kubenswrapper[4743]: E1123 00:09:12.343575 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:12.843553784 +0000 UTC m=+144.921651911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.343826 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/261cf90a-79d7-4b54-9019-7a25dc991ec7-trusted-ca\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.344091 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af4098ab-953c-4329-bec5-ec5a44f01f8e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: W1123 00:09:12.352400 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cb4b460_4fcd_4dc3_b541_9b6852a7c0ae.slice/crio-9304a0c64fb29a2a633517d1c85b80055fde4e0d6be794dc4587ccbc0da605f1 WatchSource:0}: Error finding container 9304a0c64fb29a2a633517d1c85b80055fde4e0d6be794dc4587ccbc0da605f1: Status 404 returned error can't find the container with id 9304a0c64fb29a2a633517d1c85b80055fde4e0d6be794dc4587ccbc0da605f1 Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.356174 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b9b1f5e-0438-465c-8c1d-a68f26ed23db-metrics-tls\") pod \"dns-operator-744455d44c-mdb2v\" (UID: \"3b9b1f5e-0438-465c-8c1d-a68f26ed23db\") " pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.356741 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/261cf90a-79d7-4b54-9019-7a25dc991ec7-metrics-tls\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.362667 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.383946 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-registry-certificates\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.387029 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87578262-f89f-4b5c-92ab-a94000397e31-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.387120 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e59f68b6-cb09-4c13-acc5-eb4b713711da-service-ca-bundle\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.388550 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-trusted-ca\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.391668 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b528j\" (UniqueName: \"kubernetes.io/projected/b55e0a41-c894-4560-ae81-513ecb867548-kube-api-access-b528j\") pod \"openshift-config-operator-7777fb866f-xgh54\" (UID: \"b55e0a41-c894-4560-ae81-513ecb867548\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.392940 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blqmt\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-kube-api-access-blqmt\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.400831 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.411286 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/261cf90a-79d7-4b54-9019-7a25dc991ec7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.413585 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.428064 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpqsr\" (UniqueName: \"kubernetes.io/projected/ff4a033a-c60f-4a28-8980-9bcbbdd88ba7-kube-api-access-zpqsr\") pod \"migrator-59844c95c7-kdh94\" (UID: \"ff4a033a-c60f-4a28-8980-9bcbbdd88ba7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445060 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445308 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsz2g\" (UniqueName: \"kubernetes.io/projected/ec467b25-b224-4334-a46b-9f599c60138f-kube-api-access-zsz2g\") pod \"machine-config-controller-84d6567774-2pjp2\" (UID: \"ec467b25-b224-4334-a46b-9f599c60138f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445343 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dwr\" (UniqueName: \"kubernetes.io/projected/3e4abcae-ad51-404d-b5e9-e0d87c08b639-kube-api-access-w9dwr\") pod \"multus-admission-controller-857f4d67dd-64f58\" (UID: \"3e4abcae-ad51-404d-b5e9-e0d87c08b639\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445368 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/963d8537-d384-4feb-a776-da74096c0884-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rr7k6\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445414 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f2sg\" (UniqueName: \"kubernetes.io/projected/ac9135b1-ff1e-460b-8c71-84b1f15317fa-kube-api-access-2f2sg\") pod \"control-plane-machine-set-operator-78cbb6b69f-hw6zj\" (UID: \"ac9135b1-ff1e-460b-8c71-84b1f15317fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445440 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab63bc02-007b-4a61-9355-7475eb5f4db2-config-volume\") pod \"collect-profiles-29397600-qg2pc\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445467 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d70cc2b9-d502-42fb-9f99-9473b57b2293-apiservice-cert\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445511 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e4abcae-ad51-404d-b5e9-e0d87c08b639-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-64f58\" (UID: \"3e4abcae-ad51-404d-b5e9-e0d87c08b639\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445540 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bea571d4-5d83-4f27-93c0-38ca408f0841-config-volume\") pod \"dns-default-g77gl\" (UID: \"bea571d4-5d83-4f27-93c0-38ca408f0841\") " pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445563 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7fpm\" (UniqueName: \"kubernetes.io/projected/76c94f30-89a4-408d-8168-a49eb3869a39-kube-api-access-z7fpm\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445594 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b823bb3-b726-4881-9529-b40a3847704b-cert\") pod \"ingress-canary-pc54n\" (UID: \"0b823bb3-b726-4881-9529-b40a3847704b\") " pod="openshift-ingress-canary/ingress-canary-pc54n" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445617 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af4098ab-953c-4329-bec5-ec5a44f01f8e-proxy-tls\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445642 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djvq9\" (UniqueName: \"kubernetes.io/projected/0b823bb3-b726-4881-9529-b40a3847704b-kube-api-access-djvq9\") pod \"ingress-canary-pc54n\" (UID: \"0b823bb3-b726-4881-9529-b40a3847704b\") " pod="openshift-ingress-canary/ingress-canary-pc54n" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445668 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbz8t\" (UniqueName: \"kubernetes.io/projected/af4098ab-953c-4329-bec5-ec5a44f01f8e-kube-api-access-lbz8t\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445690 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcp2x\" (UniqueName: \"kubernetes.io/projected/693544fb-054a-4b31-92a4-c1f89a7ee729-kube-api-access-jcp2x\") pod \"machine-config-server-v7tpd\" (UID: \"693544fb-054a-4b31-92a4-c1f89a7ee729\") " pod="openshift-machine-config-operator/machine-config-server-v7tpd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445724 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/d70cc2b9-d502-42fb-9f99-9473b57b2293-webhook-cert\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445755 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x576\" (UniqueName: \"kubernetes.io/projected/d70cc2b9-d502-42fb-9f99-9473b57b2293-kube-api-access-8x576\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445783 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec467b25-b224-4334-a46b-9f599c60138f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2pjp2\" (UID: \"ec467b25-b224-4334-a46b-9f599c60138f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445808 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdnn4\" (UniqueName: \"kubernetes.io/projected/0058ec17-3783-415d-8d1a-dee576ffa3a3-kube-api-access-gdnn4\") pod \"olm-operator-6b444d44fb-5mslk\" (UID: \"0058ec17-3783-415d-8d1a-dee576ffa3a3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:12 crc kubenswrapper[4743]: E1123 00:09:12.445847 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:12.945826621 +0000 UTC m=+145.023924748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.445826 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bea571d4-5d83-4f27-93c0-38ca408f0841-metrics-tls\") pod \"dns-default-g77gl\" (UID: \"bea571d4-5d83-4f27-93c0-38ca408f0841\") " pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446202 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-mountpoint-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446254 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47zwl\" (UniqueName: \"kubernetes.io/projected/5c143775-a872-437d-874a-1f30df4361b4-kube-api-access-47zwl\") pod \"service-ca-operator-777779d784-q85t8\" (UID: \"5c143775-a872-437d-874a-1f30df4361b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446281 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5p5g\" (UniqueName: \"kubernetes.io/projected/bea571d4-5d83-4f27-93c0-38ca408f0841-kube-api-access-t5p5g\") pod \"dns-default-g77gl\" (UID: \"bea571d4-5d83-4f27-93c0-38ca408f0841\") " pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446302 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c143775-a872-437d-874a-1f30df4361b4-config\") pod \"service-ca-operator-777779d784-q85t8\" (UID: \"5c143775-a872-437d-874a-1f30df4361b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446355 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab63bc02-007b-4a61-9355-7475eb5f4db2-secret-volume\") pod \"collect-profiles-29397600-qg2pc\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446381 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af4098ab-953c-4329-bec5-ec5a44f01f8e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446435 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/963d8537-d384-4feb-a776-da74096c0884-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rr7k6\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446460 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d2d13e04-dd95-4e42-97da-6a9ff04fd687-signing-cabundle\") pod \"service-ca-9c57cc56f-x99kl\" (UID: \"d2d13e04-dd95-4e42-97da-6a9ff04fd687\") " pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446557 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-plugins-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446614 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2d13e04-dd95-4e42-97da-6a9ff04fd687-signing-key\") pod \"service-ca-9c57cc56f-x99kl\" (UID: \"d2d13e04-dd95-4e42-97da-6a9ff04fd687\") " pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446640 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7c5f1f39-34be-4d98-af43-81b98c87f775-srv-cert\") pod \"catalog-operator-68c6474976-4hg6z\" (UID: \"7c5f1f39-34be-4d98-af43-81b98c87f775\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446706 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-socket-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446764 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac9135b1-ff1e-460b-8c71-84b1f15317fa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hw6zj\" (UID: \"ac9135b1-ff1e-460b-8c71-84b1f15317fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446793 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtmgj\" (UniqueName: \"kubernetes.io/projected/ab63bc02-007b-4a61-9355-7475eb5f4db2-kube-api-access-rtmgj\") pod \"collect-profiles-29397600-qg2pc\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446850 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0058ec17-3783-415d-8d1a-dee576ffa3a3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5mslk\" (UID: \"0058ec17-3783-415d-8d1a-dee576ffa3a3\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446873 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/693544fb-054a-4b31-92a4-c1f89a7ee729-node-bootstrap-token\") pod \"machine-config-server-v7tpd\" (UID: \"693544fb-054a-4b31-92a4-c1f89a7ee729\") " pod="openshift-machine-config-operator/machine-config-server-v7tpd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.446939 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0058ec17-3783-415d-8d1a-dee576ffa3a3-srv-cert\") pod \"olm-operator-6b444d44fb-5mslk\" (UID: \"0058ec17-3783-415d-8d1a-dee576ffa3a3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.447028 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be3fa480-324b-4a69-a052-2a196e8daad3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9jcb9\" (UID: \"be3fa480-324b-4a69-a052-2a196e8daad3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.447076 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2dcx\" (UniqueName: \"kubernetes.io/projected/d2d13e04-dd95-4e42-97da-6a9ff04fd687-kube-api-access-h2dcx\") pod \"service-ca-9c57cc56f-x99kl\" (UID: \"d2d13e04-dd95-4e42-97da-6a9ff04fd687\") " pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.447136 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/693544fb-054a-4b31-92a4-c1f89a7ee729-certs\") pod \"machine-config-server-v7tpd\" (UID: \"693544fb-054a-4b31-92a4-c1f89a7ee729\") " pod="openshift-machine-config-operator/machine-config-server-v7tpd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.447159 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-registration-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.447218 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec467b25-b224-4334-a46b-9f599c60138f-proxy-tls\") pod \"machine-config-controller-84d6567774-2pjp2\" (UID: \"ec467b25-b224-4334-a46b-9f599c60138f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.447289 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c143775-a872-437d-874a-1f30df4361b4-serving-cert\") pod \"service-ca-operator-777779d784-q85t8\" (UID: \"5c143775-a872-437d-874a-1f30df4361b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.447314 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-h756p\" (UniqueName: \"kubernetes.io/projected/7c5f1f39-34be-4d98-af43-81b98c87f775-kube-api-access-h756p\") pod \"catalog-operator-68c6474976-4hg6z\" (UID: \"7c5f1f39-34be-4d98-af43-81b98c87f775\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.447354 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/963d8537-d384-4feb-a776-da74096c0884-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rr7k6\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.448035 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af4098ab-953c-4329-bec5-ec5a44f01f8e-images\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.447371 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af4098ab-953c-4329-bec5-ec5a44f01f8e-images\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.448119 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7c5f1f39-34be-4d98-af43-81b98c87f775-profile-collector-cert\") pod \"catalog-operator-68c6474976-4hg6z\" (UID: \"7c5f1f39-34be-4d98-af43-81b98c87f775\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.448156 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmkxm\" (UniqueName: \"kubernetes.io/projected/963d8537-d384-4feb-a776-da74096c0884-kube-api-access-vmkxm\") pod \"marketplace-operator-79b997595-rr7k6\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.448187 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wrbk\" (UniqueName: \"kubernetes.io/projected/be3fa480-324b-4a69-a052-2a196e8daad3-kube-api-access-4wrbk\") pod \"kube-storage-version-migrator-operator-b67b599dd-9jcb9\" (UID: \"be3fa480-324b-4a69-a052-2a196e8daad3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.448227 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d70cc2b9-d502-42fb-9f99-9473b57b2293-tmpfs\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.448256 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-csi-data-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.448296 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3fa480-324b-4a69-a052-2a196e8daad3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9jcb9\" (UID: \"be3fa480-324b-4a69-a052-2a196e8daad3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.449123 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3fa480-324b-4a69-a052-2a196e8daad3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9jcb9\" (UID: \"be3fa480-324b-4a69-a052-2a196e8daad3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.451397 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d70cc2b9-d502-42fb-9f99-9473b57b2293-tmpfs\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.451527 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-csi-data-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.451563 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec467b25-b224-4334-a46b-9f599c60138f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2pjp2\" (UID: \"ec467b25-b224-4334-a46b-9f599c60138f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.451652 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bea571d4-5d83-4f27-93c0-38ca408f0841-config-volume\") pod \"dns-default-g77gl\" (UID: \"bea571d4-5d83-4f27-93c0-38ca408f0841\") " pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.452424 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-mountpoint-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.452801 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c143775-a872-437d-874a-1f30df4361b4-config\") pod \"service-ca-operator-777779d784-q85t8\" (UID: \"5c143775-a872-437d-874a-1f30df4361b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 
00:09:12.452993 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-socket-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.454508 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-plugins-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.454762 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/76c94f30-89a4-408d-8168-a49eb3869a39-registration-dir\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.454857 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af4098ab-953c-4329-bec5-ec5a44f01f8e-proxy-tls\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.455381 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-bound-sa-token\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.456057 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af4098ab-953c-4329-bec5-ec5a44f01f8e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.456277 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d2d13e04-dd95-4e42-97da-6a9ff04fd687-signing-cabundle\") pod \"service-ca-9c57cc56f-x99kl\" (UID: \"d2d13e04-dd95-4e42-97da-6a9ff04fd687\") " pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.456847 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e4abcae-ad51-404d-b5e9-e0d87c08b639-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-64f58\" (UID: \"3e4abcae-ad51-404d-b5e9-e0d87c08b639\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.457162 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7c5f1f39-34be-4d98-af43-81b98c87f775-srv-cert\") pod \"catalog-operator-68c6474976-4hg6z\" (UID: \"7c5f1f39-34be-4d98-af43-81b98c87f775\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.466614 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d70cc2b9-d502-42fb-9f99-9473b57b2293-webhook-cert\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.471777 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.473312 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2d13e04-dd95-4e42-97da-6a9ff04fd687-signing-key\") pod \"service-ca-9c57cc56f-x99kl\" (UID: \"d2d13e04-dd95-4e42-97da-6a9ff04fd687\") " pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.474900 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7c5f1f39-34be-4d98-af43-81b98c87f775-profile-collector-cert\") pod \"catalog-operator-68c6474976-4hg6z\" (UID: \"7c5f1f39-34be-4d98-af43-81b98c87f775\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.475074 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be3fa480-324b-4a69-a052-2a196e8daad3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9jcb9\" (UID: \"be3fa480-324b-4a69-a052-2a196e8daad3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.475359 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b823bb3-b726-4881-9529-b40a3847704b-cert\") pod \"ingress-canary-pc54n\" (UID: \"0b823bb3-b726-4881-9529-b40a3847704b\") " pod="openshift-ingress-canary/ingress-canary-pc54n" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.478912 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab63bc02-007b-4a61-9355-7475eb5f4db2-secret-volume\") pod \"collect-profiles-29397600-qg2pc\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.479118 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/693544fb-054a-4b31-92a4-c1f89a7ee729-certs\") pod \"machine-config-server-v7tpd\" (UID: \"693544fb-054a-4b31-92a4-c1f89a7ee729\") " pod="openshift-machine-config-operator/machine-config-server-v7tpd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.479247 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d70cc2b9-d502-42fb-9f99-9473b57b2293-apiservice-cert\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: 
I1123 00:09:12.481064 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z7mnv"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.482406 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bea571d4-5d83-4f27-93c0-38ca408f0841-metrics-tls\") pod \"dns-default-g77gl\" (UID: \"bea571d4-5d83-4f27-93c0-38ca408f0841\") " pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.482910 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0058ec17-3783-415d-8d1a-dee576ffa3a3-srv-cert\") pod \"olm-operator-6b444d44fb-5mslk\" (UID: \"0058ec17-3783-415d-8d1a-dee576ffa3a3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.483077 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec467b25-b224-4334-a46b-9f599c60138f-proxy-tls\") pod \"machine-config-controller-84d6567774-2pjp2\" (UID: \"ec467b25-b224-4334-a46b-9f599c60138f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.483392 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac9135b1-ff1e-460b-8c71-84b1f15317fa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hw6zj\" (UID: \"ac9135b1-ff1e-460b-8c71-84b1f15317fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.485117 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/963d8537-d384-4feb-a776-da74096c0884-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rr7k6\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.485175 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0058ec17-3783-415d-8d1a-dee576ffa3a3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5mslk\" (UID: \"0058ec17-3783-415d-8d1a-dee576ffa3a3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.488230 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/693544fb-054a-4b31-92a4-c1f89a7ee729-node-bootstrap-token\") pod \"machine-config-server-v7tpd\" (UID: \"693544fb-054a-4b31-92a4-c1f89a7ee729\") " pod="openshift-machine-config-operator/machine-config-server-v7tpd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.488421 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52pkt\" (UniqueName: \"kubernetes.io/projected/e59f68b6-cb09-4c13-acc5-eb4b713711da-kube-api-access-52pkt\") pod \"router-default-5444994796-sqgm6\" (UID: \"e59f68b6-cb09-4c13-acc5-eb4b713711da\") " pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.489655 4743 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9xmv\" (UniqueName: \"kubernetes.io/projected/3b9b1f5e-0438-465c-8c1d-a68f26ed23db-kube-api-access-c9xmv\") pod \"dns-operator-744455d44c-mdb2v\" (UID: \"3b9b1f5e-0438-465c-8c1d-a68f26ed23db\") " pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.490017 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c143775-a872-437d-874a-1f30df4361b4-serving-cert\") pod \"service-ca-operator-777779d784-q85t8\" (UID: \"5c143775-a872-437d-874a-1f30df4361b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.529644 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhkdn\" (UniqueName: \"kubernetes.io/projected/261cf90a-79d7-4b54-9019-7a25dc991ec7-kube-api-access-lhkdn\") pod \"ingress-operator-5b745b69d9-4vhsd\" (UID: \"261cf90a-79d7-4b54-9019-7a25dc991ec7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.549374 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f2sg\" (UniqueName: \"kubernetes.io/projected/ac9135b1-ff1e-460b-8c71-84b1f15317fa-kube-api-access-2f2sg\") pod \"control-plane-machine-set-operator-78cbb6b69f-hw6zj\" (UID: \"ac9135b1-ff1e-460b-8c71-84b1f15317fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.554319 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: E1123 00:09:12.555072 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.055051556 +0000 UTC m=+145.133149683 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.570406 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsz2g\" (UniqueName: \"kubernetes.io/projected/ec467b25-b224-4334-a46b-9f599c60138f-kube-api-access-zsz2g\") pod \"machine-config-controller-84d6567774-2pjp2\" (UID: \"ec467b25-b224-4334-a46b-9f599c60138f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" Nov 23 00:09:12 crc kubenswrapper[4743]: W1123 00:09:12.575592 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e63d320_241c_4f1e_ace2_6b28a8d9d338.slice/crio-6e89f99e39ed436eb7fa026c32877aa90cdc44bb5e3960d8f0c191b502f775e7 WatchSource:0}: Error finding container 6e89f99e39ed436eb7fa026c32877aa90cdc44bb5e3960d8f0c191b502f775e7: Status 404 returned error can't find the container with id 6e89f99e39ed436eb7fa026c32877aa90cdc44bb5e3960d8f0c191b502f775e7 Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.590456 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dwr\" (UniqueName: \"kubernetes.io/projected/3e4abcae-ad51-404d-b5e9-e0d87c08b639-kube-api-access-w9dwr\") pod \"multus-admission-controller-857f4d67dd-64f58\" (UID: \"3e4abcae-ad51-404d-b5e9-e0d87c08b639\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.611283 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgddj"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.615675 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djvq9\" (UniqueName: \"kubernetes.io/projected/0b823bb3-b726-4881-9529-b40a3847704b-kube-api-access-djvq9\") pod \"ingress-canary-pc54n\" (UID: \"0b823bb3-b726-4881-9529-b40a3847704b\") " pod="openshift-ingress-canary/ingress-canary-pc54n" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.616942 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" event={"ID":"d36d9643-d39a-480a-8caa-2a318102ef5b","Type":"ContainerStarted","Data":"7f9d804fc37f55786453c93d24a89b4844e31350625cb578a564217a761a6d98"} Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.617757 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29397600-gj2zp" event={"ID":"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae","Type":"ContainerStarted","Data":"9304a0c64fb29a2a633517d1c85b80055fde4e0d6be794dc4587ccbc0da605f1"} Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.619603 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" event={"ID":"203f4e5b-490a-43cb-90db-8beed3234d54","Type":"ContainerStarted","Data":"366c481314ecd16838254a167f94a701315112c1e3ad5695c64e475722c24753"} Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.620408 4743 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" event={"ID":"d39201fe-fa08-49ca-adec-15441d9cbaa5","Type":"ContainerStarted","Data":"54a5001214ecda3e593468fdbb340b605ef7cc442d4235039757517db3e0fea6"} Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.621593 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" event={"ID":"a1a67a24-9440-4409-98d2-3ddbc8dda335","Type":"ContainerStarted","Data":"13e51c44eeb4c6a0acc7f226256fa5aa6549e77481868c26197a43f92dcd78bf"} Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.623028 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" event={"ID":"bfb3cd0c-631e-4904-ad6c-bd2393d94c46","Type":"ContainerStarted","Data":"da4ae50cef06a2977e53986789e7b245871f5f3a73a4b5dd587f0e88ee753c1d"} Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.623999 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" event={"ID":"6e63d320-241c-4f1e-ace2-6b28a8d9d338","Type":"ContainerStarted","Data":"6e89f99e39ed436eb7fa026c32877aa90cdc44bb5e3960d8f0c191b502f775e7"} Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.648209 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcp2x\" (UniqueName: \"kubernetes.io/projected/693544fb-054a-4b31-92a4-c1f89a7ee729-kube-api-access-jcp2x\") pod \"machine-config-server-v7tpd\" (UID: \"693544fb-054a-4b31-92a4-c1f89a7ee729\") " pod="openshift-machine-config-operator/machine-config-server-v7tpd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.649603 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbz8t\" (UniqueName: \"kubernetes.io/projected/af4098ab-953c-4329-bec5-ec5a44f01f8e-kube-api-access-lbz8t\") pod \"machine-config-operator-74547568cd-hmpwd\" (UID: \"af4098ab-953c-4329-bec5-ec5a44f01f8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.650087 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab63bc02-007b-4a61-9355-7475eb5f4db2-config-volume\") pod \"collect-profiles-29397600-qg2pc\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.655401 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:12 crc kubenswrapper[4743]: E1123 00:09:12.655764 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.155734754 +0000 UTC m=+145.233832891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.655916 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: E1123 00:09:12.656352 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.156340149 +0000 UTC m=+145.234438276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.666464 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x576\" (UniqueName: \"kubernetes.io/projected/d70cc2b9-d502-42fb-9f99-9473b57b2293-kube-api-access-8x576\") pod \"packageserver-d55dfcdfc-b8wg9\" (UID: \"d70cc2b9-d502-42fb-9f99-9473b57b2293\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.666844 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.681293 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.684146 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.712885 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmkxm\" (UniqueName: \"kubernetes.io/projected/963d8537-d384-4feb-a776-da74096c0884-kube-api-access-vmkxm\") pod \"marketplace-operator-79b997595-rr7k6\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.723159 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.727231 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.729514 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7fpm\" (UniqueName: \"kubernetes.io/projected/76c94f30-89a4-408d-8168-a49eb3869a39-kube-api-access-z7fpm\") pod \"csi-hostpathplugin-l6msj\" (UID: \"76c94f30-89a4-408d-8168-a49eb3869a39\") " pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.729796 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wrbk\" (UniqueName: \"kubernetes.io/projected/be3fa480-324b-4a69-a052-2a196e8daad3-kube-api-access-4wrbk\") pod \"kube-storage-version-migrator-operator-b67b599dd-9jcb9\" (UID: \"be3fa480-324b-4a69-a052-2a196e8daad3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.748002 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.749589 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2dcx\" (UniqueName: \"kubernetes.io/projected/d2d13e04-dd95-4e42-97da-6a9ff04fd687-kube-api-access-h2dcx\") pod \"service-ca-9c57cc56f-x99kl\" (UID: \"d2d13e04-dd95-4e42-97da-6a9ff04fd687\") " pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.750119 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.755254 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.756157 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.764052 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:12 crc kubenswrapper[4743]: E1123 00:09:12.764961 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.264926148 +0000 UTC m=+145.343024285 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.775697 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdnn4\" (UniqueName: \"kubernetes.io/projected/0058ec17-3783-415d-8d1a-dee576ffa3a3-kube-api-access-gdnn4\") pod \"olm-operator-6b444d44fb-5mslk\" (UID: \"0058ec17-3783-415d-8d1a-dee576ffa3a3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.781675 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.787031 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.789406 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtmgj\" (UniqueName: \"kubernetes.io/projected/ab63bc02-007b-4a61-9355-7475eb5f4db2-kube-api-access-rtmgj\") pod \"collect-profiles-29397600-qg2pc\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.796959 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.804560 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.818146 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47zwl\" (UniqueName: \"kubernetes.io/projected/5c143775-a872-437d-874a-1f30df4361b4-kube-api-access-47zwl\") pod \"service-ca-operator-777779d784-q85t8\" (UID: \"5c143775-a872-437d-874a-1f30df4361b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.818568 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.826883 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.833973 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.842114 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.846291 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h756p\" (UniqueName: \"kubernetes.io/projected/7c5f1f39-34be-4d98-af43-81b98c87f775-kube-api-access-h756p\") pod \"catalog-operator-68c6474976-4hg6z\" (UID: \"7c5f1f39-34be-4d98-af43-81b98c87f775\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.850051 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.856908 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-njxkk"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.858102 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pc54n" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.866311 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: E1123 00:09:12.866649 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.36663458 +0000 UTC m=+145.444732707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.868789 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4xqck"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.870515 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.872225 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v7tpd" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.878196 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jjr84"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.895141 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l6msj" Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.940558 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.943332 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dfr5p"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.956241 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rdgvc"] Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.968141 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:12 crc kubenswrapper[4743]: E1123 00:09:12.968324 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.468290852 +0000 UTC m=+145.546388979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:12 crc kubenswrapper[4743]: I1123 00:09:12.968565 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:12 crc kubenswrapper[4743]: E1123 00:09:12.968895 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.468880536 +0000 UTC m=+145.546978663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.004599 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5p5g\" (UniqueName: \"kubernetes.io/projected/bea571d4-5d83-4f27-93c0-38ca408f0841-kube-api-access-t5p5g\") pod \"dns-default-g77gl\" (UID: \"bea571d4-5d83-4f27-93c0-38ca408f0841\") " pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:13 crc kubenswrapper[4743]: W1123 00:09:13.020800 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90e048af_50eb_4557_83e1_e19979685ded.slice/crio-38295b08efbe0f849c72799878fee5baf571b2183907e1ef7c1c757c0eea6b4a WatchSource:0}: Error finding container 38295b08efbe0f849c72799878fee5baf571b2183907e1ef7c1c757c0eea6b4a: Status 404 returned error can't find the container with id 38295b08efbe0f849c72799878fee5baf571b2183907e1ef7c1c757c0eea6b4a Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.040045 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-k4dzd"] Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.042001 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.070154 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.070404 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.570369504 +0000 UTC m=+145.648467631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.070574 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.070926 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.570917637 +0000 UTC m=+145.649015764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.164792 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.176695 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.177577 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.677553099 +0000 UTC m=+145.755651226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.192249 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l"] Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.193791 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9"] Nov 23 00:09:13 crc kubenswrapper[4743]: W1123 00:09:13.260345 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6309be1_c0c3_4a38_9770_85295aec41ae.slice/crio-44c7484a1e721177c8492ebacbda119c0c1e71efe1449ad9cd5d69919957e7f2 WatchSource:0}: Error finding container 44c7484a1e721177c8492ebacbda119c0c1e71efe1449ad9cd5d69919957e7f2: Status 404 returned error can't find the container with id 44c7484a1e721177c8492ebacbda119c0c1e71efe1449ad9cd5d69919957e7f2 Nov 23 00:09:13 crc kubenswrapper[4743]: W1123 00:09:13.267122 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3672dfeb_7ed6_4281_bd84_7588c3df430a.slice/crio-31e1e83e29fc6bd3305a7db3503ca747437deb023f92cf41b4a33720e2cb3796 WatchSource:0}: Error finding container 31e1e83e29fc6bd3305a7db3503ca747437deb023f92cf41b4a33720e2cb3796: Status 404 returned error can't find the container with id 31e1e83e29fc6bd3305a7db3503ca747437deb023f92cf41b4a33720e2cb3796 Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.278340 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.278709 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.778696129 +0000 UTC m=+145.856794246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: W1123 00:09:13.318094 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf45d2c0_250b_4c0f_8fe4_4eee3618a17e.slice/crio-ea1ac448f6931aaef5057510ca20adb7006c595b592c35b6903c628fd477e8f6 WatchSource:0}: Error finding container ea1ac448f6931aaef5057510ca20adb7006c595b592c35b6903c628fd477e8f6: Status 404 returned error can't find the container with id ea1ac448f6931aaef5057510ca20adb7006c595b592c35b6903c628fd477e8f6 Nov 23 00:09:13 crc kubenswrapper[4743]: W1123 00:09:13.320982 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda529fd56_b206_4ec0_984e_addbd17374ee.slice/crio-c20aa49b2ea22398945858d5194513aa1289fe053212a8d42d25afec4338babd WatchSource:0}: Error finding container c20aa49b2ea22398945858d5194513aa1289fe053212a8d42d25afec4338babd: Status 404 returned error can't find the container with id c20aa49b2ea22398945858d5194513aa1289fe053212a8d42d25afec4338babd Nov 23 00:09:13 crc kubenswrapper[4743]: W1123 00:09:13.321871 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42f92a20_3051_4cc0_861c_5b6a58753aaf.slice/crio-1661b76f67f2b8cea674ac36a37a96cb1b6e620a346501f5bab0f01600496f83 WatchSource:0}: Error finding container 1661b76f67f2b8cea674ac36a37a96cb1b6e620a346501f5bab0f01600496f83: Status 404 returned error can't find the container with id 1661b76f67f2b8cea674ac36a37a96cb1b6e620a346501f5bab0f01600496f83 Nov 23 00:09:13 crc kubenswrapper[4743]: W1123 00:09:13.348100 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfccac410_c6c3_454f_938c_64beeb04e317.slice/crio-17db3f23e025410270b50463469b7e80beae435f8a319d92c3dfbbdf2cf58bbe WatchSource:0}: Error finding container 17db3f23e025410270b50463469b7e80beae435f8a319d92c3dfbbdf2cf58bbe: Status 404 returned error can't find the container with id 17db3f23e025410270b50463469b7e80beae435f8a319d92c3dfbbdf2cf58bbe Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.378913 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.379679 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.879179963 +0000 UTC m=+145.957278100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.380385 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.382127 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.882110633 +0000 UTC m=+145.960208760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.483042 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.483315 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.983278174 +0000 UTC m=+146.061376291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.483827 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.484283 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:13.984262647 +0000 UTC m=+146.062360774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.584570 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.584860 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.084824633 +0000 UTC m=+146.162922820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.584962 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.585301 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.085287234 +0000 UTC m=+146.163385361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.593256 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mdb2v"] Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.631749 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" event={"ID":"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e","Type":"ContainerStarted","Data":"ea1ac448f6931aaef5057510ca20adb7006c595b592c35b6903c628fd477e8f6"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.633287 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" event={"ID":"3672dfeb-7ed6-4281-bd84-7588c3df430a","Type":"ContainerStarted","Data":"31e1e83e29fc6bd3305a7db3503ca747437deb023f92cf41b4a33720e2cb3796"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.635437 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" event={"ID":"90e048af-50eb-4557-83e1-e19979685ded","Type":"ContainerStarted","Data":"38295b08efbe0f849c72799878fee5baf571b2183907e1ef7c1c757c0eea6b4a"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.637455 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" event={"ID":"e6309be1-c0c3-4a38-9770-85295aec41ae","Type":"ContainerStarted","Data":"44c7484a1e721177c8492ebacbda119c0c1e71efe1449ad9cd5d69919957e7f2"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.638756 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" 
event={"ID":"bfb3cd0c-631e-4904-ad6c-bd2393d94c46","Type":"ContainerStarted","Data":"1506672598b8de4cd0fa102a8cadeff534865fe45a26d827bd5a23faa5d8578d"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.639615 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" event={"ID":"d7de29f4-885a-469d-843e-3762c81f5379","Type":"ContainerStarted","Data":"73077c757cd03893fd318a5045db0b94f138a3e884d364b6b7d61fddff3b6721"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.642809 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dfr5p" event={"ID":"a529fd56-b206-4ec0-984e-addbd17374ee","Type":"ContainerStarted","Data":"c20aa49b2ea22398945858d5194513aa1289fe053212a8d42d25afec4338babd"} Nov 23 00:09:13 crc kubenswrapper[4743]: W1123 00:09:13.646349 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b9b1f5e_0438_465c_8c1d_a68f26ed23db.slice/crio-fe0491127d8f6633cc9d160914e3e46b1ec2f6515a8fb20b60326cc2ba9dc73f WatchSource:0}: Error finding container fe0491127d8f6633cc9d160914e3e46b1ec2f6515a8fb20b60326cc2ba9dc73f: Status 404 returned error can't find the container with id fe0491127d8f6633cc9d160914e3e46b1ec2f6515a8fb20b60326cc2ba9dc73f Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.647210 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" event={"ID":"187ddc60-f070-4386-a8f8-b2ae8fd2ed08","Type":"ContainerStarted","Data":"494c60e8a900ad6472ed05a45d2c6fc15c5f66001f81db0ed5df0515ad46c8c9"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.652233 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xgh54"] Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.654073 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" event={"ID":"c9260cd3-3e10-47fe-b6f9-806bc90621fd","Type":"ContainerStarted","Data":"476f08fc9a02bc558543ca07eb2a3dde8733ea3a90adfb540ee3cf14115689af"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.655808 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k4dzd" event={"ID":"fccac410-c6c3-454f-938c-64beeb04e317","Type":"ContainerStarted","Data":"17db3f23e025410270b50463469b7e80beae435f8a319d92c3dfbbdf2cf58bbe"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.661882 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" event={"ID":"8ab03a1e-bf7f-4ad0-89da-d129b78994e0","Type":"ContainerStarted","Data":"5d6f02308120e7b4a7782586bae403fd209acc8f49e78b31695e6658be2addd3"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.663112 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" event={"ID":"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022","Type":"ContainerStarted","Data":"218abd57fb0932f32d3b839b93ba399369d696f658d22129e01f9ad3c008d4cb"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.664551 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jjr84" 
event={"ID":"10760937-904d-4004-837d-66e5e3dfe95f","Type":"ContainerStarted","Data":"bf766e0916fadf3e301e471de89c74785ca60d75b6888dbeabc7c8f4bdf35d8f"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.666652 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" event={"ID":"d39201fe-fa08-49ca-adec-15441d9cbaa5","Type":"ContainerStarted","Data":"fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.667685 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" event={"ID":"42f92a20-3051-4cc0-861c-5b6a58753aaf","Type":"ContainerStarted","Data":"1661b76f67f2b8cea674ac36a37a96cb1b6e620a346501f5bab0f01600496f83"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.669949 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sqgm6" event={"ID":"e59f68b6-cb09-4c13-acc5-eb4b713711da","Type":"ContainerStarted","Data":"de7c5fbbc9ff952767f56af9fd2e3ee1a5f6278abc6fb179d51391e7022e4978"} Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.686335 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.686607 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.186566007 +0000 UTC m=+146.264664144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.686684 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.687086 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.187076819 +0000 UTC m=+146.265174946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.788400 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.788623 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.288589188 +0000 UTC m=+146.366687325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.788781 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.789225 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.289209703 +0000 UTC m=+146.367307830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.889708 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.890044 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.390024524 +0000 UTC m=+146.468122651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:13 crc kubenswrapper[4743]: I1123 00:09:13.992496 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:13 crc kubenswrapper[4743]: E1123 00:09:13.993452 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.493437699 +0000 UTC m=+146.571535826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.094623 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.094975 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.594924837 +0000 UTC m=+146.673022974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.095314 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.095808 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.595794808 +0000 UTC m=+146.673892985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.187901 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.198533 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.199186 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.69913989 +0000 UTC m=+146.777238017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.199738 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.200233 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.700206296 +0000 UTC m=+146.778304423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.304166 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.304445 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.804400389 +0000 UTC m=+146.882498516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.304606 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.305113 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.805091246 +0000 UTC m=+146.883189373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.406392 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.406609 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.906560253 +0000 UTC m=+146.984658380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.406693 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.407169 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:14.907158178 +0000 UTC m=+146.985256405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: W1123 00:09:14.427057 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf4098ab_953c_4329_bec5_ec5a44f01f8e.slice/crio-a32e0322bb93ee1c27afc55e27a4a4d3afdde54bf568bda805d6602d0bd2faa3 WatchSource:0}: Error finding container a32e0322bb93ee1c27afc55e27a4a4d3afdde54bf568bda805d6602d0bd2faa3: Status 404 returned error can't find the container with id a32e0322bb93ee1c27afc55e27a4a4d3afdde54bf568bda805d6602d0bd2faa3 Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.483155 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.508443 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.508989 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.008961603 +0000 UTC m=+147.087059730 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.520450 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.537442 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.556594 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-q85t8"] Nov 23 00:09:14 crc kubenswrapper[4743]: W1123 00:09:14.562924 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff4a033a_c60f_4a28_8980_9bcbbdd88ba7.slice/crio-2625dc5466ad70678bd8c8ce2e806648bcb65d98b135e1f59d7acbd136701730 WatchSource:0}: Error finding container 2625dc5466ad70678bd8c8ce2e806648bcb65d98b135e1f59d7acbd136701730: Status 404 returned error can't find the container with id 2625dc5466ad70678bd8c8ce2e806648bcb65d98b135e1f59d7acbd136701730 Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.578762 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.610241 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.610780 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.110763539 +0000 UTC m=+147.188861666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.711945 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.712274 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.212217746 +0000 UTC m=+147.290315883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.713022 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.713499 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.213471056 +0000 UTC m=+147.291569183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.761864 4743 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-52h52 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.761915 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" podUID="6e63d320-241c-4f1e-ace2-6b28a8d9d338" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.782757 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" podStartSLOduration=123.782724226 podStartE2EDuration="2m3.782724226s" podCreationTimestamp="2025-11-23 00:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:14.776003474 +0000 UTC m=+146.854101621" watchObservedRunningTime="2025-11-23 00:09:14.782724226 +0000 UTC m=+146.860822353" Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.792948 4743 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-zgddj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.793004 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" podUID="c9260cd3-3e10-47fe-b6f9-806bc90621fd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.814438 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.814781 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.314744959 +0000 UTC m=+147.392843086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.815038 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.815760 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.315745223 +0000 UTC m=+147.393843350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.820763 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" podStartSLOduration=124.820736553 podStartE2EDuration="2m4.820736553s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:14.8131367 +0000 UTC m=+146.891234847" watchObservedRunningTime="2025-11-23 00:09:14.820736553 +0000 UTC m=+146.898834680" Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833406 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" event={"ID":"6e63d320-241c-4f1e-ace2-6b28a8d9d338","Type":"ContainerStarted","Data":"bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239"} Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833464 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833479 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x99kl"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833510 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dfr5p" event={"ID":"a529fd56-b206-4ec0-984e-addbd17374ee","Type":"ContainerStarted","Data":"8b46031a580b041f2fd1aaca9e9437b9de93673d25c1781f7d80e5d31bbaf61b"} Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833523 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" 
event={"ID":"c9260cd3-3e10-47fe-b6f9-806bc90621fd","Type":"ContainerStarted","Data":"cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0"} Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833535 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833549 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" event={"ID":"90e048af-50eb-4557-83e1-e19979685ded","Type":"ContainerStarted","Data":"f9a38e4a06a597e2716db627e5f97a84bc6724de41312bdd00e18ab830b953ef"} Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833564 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833574 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833585 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776" event={"ID":"100014ec-26b2-4311-82fd-41fa1228c011","Type":"ContainerStarted","Data":"3f74eafd70a4af01b7efd17e3a359d6ba1baea68722dee389fa64c15d0bc7b4f"} Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833595 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" event={"ID":"d36d9643-d39a-480a-8caa-2a318102ef5b","Type":"ContainerStarted","Data":"f78566cef156cf9e539e0cd64e4b388fbe5e640db739eb1291a10c35d93b1f41"} Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.833607 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l6msj"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.851315 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v45wp" podStartSLOduration=124.85129414 podStartE2EDuration="2m4.85129414s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:14.850691736 +0000 UTC m=+146.928789863" watchObservedRunningTime="2025-11-23 00:09:14.85129414 +0000 UTC m=+146.929392267" Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.861470 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" event={"ID":"bf45d2c0-250b-4c0f-8fe4-4eee3618a17e","Type":"ContainerStarted","Data":"93ec2d2e8632e10f0186b9038971c4ab4d3b0fcdc8ecadb7697f777e6c45d52a"} Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.878859 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" event={"ID":"b55e0a41-c894-4560-ae81-513ecb867548","Type":"ContainerStarted","Data":"b86ec43d7a8b2271da2465344e5c2a2564c8e78654a7a5c82080cc331a749e11"} Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.908253 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pc54n"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.933784 4743 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.934192 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-4xqck" podStartSLOduration=124.933983335 podStartE2EDuration="2m4.933983335s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:14.887541725 +0000 UTC m=+146.965639852" watchObservedRunningTime="2025-11-23 00:09:14.933983335 +0000 UTC m=+147.012081482" Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.934946 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.936338 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jjr84" event={"ID":"10760937-904d-4004-837d-66e5e3dfe95f","Type":"ContainerStarted","Data":"591c322b955edcd3bf5a481a348e169efd4ef1c63082c9fbb8848cf867a712a3"} Nov 23 00:09:14 crc kubenswrapper[4743]: E1123 00:09:14.936680 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.436636949 +0000 UTC m=+147.514735076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.937844 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.942947 4743 patch_prober.go:28] interesting pod/console-operator-58897d9998-jjr84 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.942996 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jjr84" podUID="10760937-904d-4004-837d-66e5e3dfe95f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.957968 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.958050 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" event={"ID":"af4098ab-953c-4329-bec5-ec5a44f01f8e","Type":"ContainerStarted","Data":"a32e0322bb93ee1c27afc55e27a4a4d3afdde54bf568bda805d6602d0bd2faa3"} Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.962322 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rr7k6"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.969964 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94" event={"ID":"ff4a033a-c60f-4a28-8980-9bcbbdd88ba7","Type":"ContainerStarted","Data":"2625dc5466ad70678bd8c8ce2e806648bcb65d98b135e1f59d7acbd136701730"} Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.973696 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-g77gl"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.981106 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jjr84" podStartSLOduration=124.981076841 podStartE2EDuration="2m4.981076841s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:14.958953747 +0000 UTC m=+147.037051894" watchObservedRunningTime="2025-11-23 00:09:14.981076841 +0000 UTC m=+147.059174968" Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.984830 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-64f58"] Nov 23 00:09:14 crc kubenswrapper[4743]: I1123 00:09:14.991600 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" event={"ID":"a1a67a24-9440-4409-98d2-3ddbc8dda335","Type":"ContainerStarted","Data":"0eb17399b0b91a9780bd00ce7c1171110c980ccf2ad838dd337e701ba37915f3"} Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.003171 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" event={"ID":"d70cc2b9-d502-42fb-9f99-9473b57b2293","Type":"ContainerStarted","Data":"7e4cf0d2998ec45416c5437cf7cd880e1321fbf560c43d1fd675aab437a75dd3"} Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.009384 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9"] Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.011070 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tdbkj" podStartSLOduration=125.011045004 podStartE2EDuration="2m5.011045004s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:15.00882466 +0000 UTC m=+147.086922797" watchObservedRunningTime="2025-11-23 00:09:15.011045004 +0000 UTC m=+147.089143131" Nov 23 00:09:15 crc kubenswrapper[4743]: W1123 00:09:15.021817 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac9135b1_ff1e_460b_8c71_84b1f15317fa.slice/crio-23193a8781c8126b260675bc3331841ffc67d9ea600b8b02233bd30264a3370e WatchSource:0}: Error finding container 23193a8781c8126b260675bc3331841ffc67d9ea600b8b02233bd30264a3370e: Status 404 returned error can't find the container with id 23193a8781c8126b260675bc3331841ffc67d9ea600b8b02233bd30264a3370e Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.022514 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29397600-gj2zp" event={"ID":"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae","Type":"ContainerStarted","Data":"24aca8838ad4fac1e38ad81127b442c63866e5fec0f1c295af85130086e18e17"} Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.033392 4743 generic.go:334] "Generic (PLEG): container finished" podID="203f4e5b-490a-43cb-90db-8beed3234d54" containerID="84d331f061ab856dd17bce535f901038f5b3d8c4145b40efeda1aafc9fa2f9f6" exitCode=0 Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.033468 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" event={"ID":"203f4e5b-490a-43cb-90db-8beed3234d54","Type":"ContainerDied","Data":"84d331f061ab856dd17bce535f901038f5b3d8c4145b40efeda1aafc9fa2f9f6"} Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.037140 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.038751 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2025-11-23 00:09:15.538721401 +0000 UTC m=+147.616819758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.039991 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v7tpd" event={"ID":"693544fb-054a-4b31-92a4-c1f89a7ee729","Type":"ContainerStarted","Data":"d0ace86cc2a7833eb72a54079adc9cbd045589e7b7440062f5d50cb0df710aed"} Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.040651 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29397600-gj2zp" podStartSLOduration=125.040639767 podStartE2EDuration="2m5.040639767s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:15.040134675 +0000 UTC m=+147.118232822" watchObservedRunningTime="2025-11-23 00:09:15.040639767 +0000 UTC m=+147.118737884" Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.043901 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" event={"ID":"3b9b1f5e-0438-465c-8c1d-a68f26ed23db","Type":"ContainerStarted","Data":"fe0491127d8f6633cc9d160914e3e46b1ec2f6515a8fb20b60326cc2ba9dc73f"} Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.049993 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" event={"ID":"e6309be1-c0c3-4a38-9770-85295aec41ae","Type":"ContainerStarted","Data":"ceb0a0179243afc6b2d94dfa90d144cf08d94c238203bbcfebf5820d105b7b21"} Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.050323 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:15 crc kubenswrapper[4743]: W1123 00:09:15.059797 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbea571d4_5d83_4f27_93c0_38ca408f0841.slice/crio-9f7d8f804d89bf253a70376bdf94fa3cc2b1fba9778d644936774951037401fb WatchSource:0}: Error finding container 9f7d8f804d89bf253a70376bdf94fa3cc2b1fba9778d644936774951037401fb: Status 404 returned error can't find the container with id 9f7d8f804d89bf253a70376bdf94fa3cc2b1fba9778d644936774951037401fb Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.063672 4743 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7pqx6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.063739 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" podUID="d39201fe-fa08-49ca-adec-15441d9cbaa5" containerName="controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.117250 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7fd46" podStartSLOduration=125.117226645 podStartE2EDuration="2m5.117226645s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:15.090995162 +0000 UTC m=+147.169093289" watchObservedRunningTime="2025-11-23 00:09:15.117226645 +0000 UTC m=+147.195324772" Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.118225 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" podStartSLOduration=125.118218219 podStartE2EDuration="2m5.118218219s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:15.116674691 +0000 UTC m=+147.194772838" watchObservedRunningTime="2025-11-23 00:09:15.118218219 +0000 UTC m=+147.196316346" Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.138182 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.138338 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.638305063 +0000 UTC m=+147.716403190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.140320 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.140635 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.640627039 +0000 UTC m=+147.718725166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.242619 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.242870 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.742827904 +0000 UTC m=+147.820926031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.243273 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.243648 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.743634054 +0000 UTC m=+147.821732181 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.344358 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.344780 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.844758033 +0000 UTC m=+147.922856160 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.445970 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.446451 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:15.946435845 +0000 UTC m=+148.024533972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.546832 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.547007 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:16.04696367 +0000 UTC m=+148.125061807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.547591 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.547954 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:16.047939274 +0000 UTC m=+148.126037401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.648938 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.649349 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:16.149328079 +0000 UTC m=+148.227426206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.750802 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.751294 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:16.251276938 +0000 UTC m=+148.329375065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.853666 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.854865 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:16.354836936 +0000 UTC m=+148.432935063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:15 crc kubenswrapper[4743]: I1123 00:09:15.956019 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:15 crc kubenswrapper[4743]: E1123 00:09:15.956467 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:16.456440007 +0000 UTC m=+148.534538124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.059097 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:16 crc kubenswrapper[4743]: E1123 00:09:16.059448 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:16.559429911 +0000 UTC m=+148.637528038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.063137 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" event={"ID":"261cf90a-79d7-4b54-9019-7a25dc991ec7","Type":"ContainerStarted","Data":"af73c9d9d1c9deb29d85d349b559179b1284b80ed7d50b9c2fce84b6bb71d2c0"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.073833 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" event={"ID":"b55e0a41-c894-4560-ae81-513ecb867548","Type":"ContainerStarted","Data":"2d6e20c516ee8863960391fb95b6e4bffd048c78bb6824e7d5af9ab49ba686cb"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.075801 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" event={"ID":"ec467b25-b224-4334-a46b-9f599c60138f","Type":"ContainerStarted","Data":"d5415e69c2f12fccb02734e6931ac4749d55a9db948560451b1860530fadc5de"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.090362 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" event={"ID":"3672dfeb-7ed6-4281-bd84-7588c3df430a","Type":"ContainerStarted","Data":"834afb5f53da57dba0bcc25211e9c4d43bcc7b3adffe5fab2617c6d1bddcd13a"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.106245 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" event={"ID":"0058ec17-3783-415d-8d1a-dee576ffa3a3","Type":"ContainerStarted","Data":"786ebfbc9a3901c6f60a2e27cb91cf4cc99ef3dc941e4cc67372b5bb7ac68619"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.106308 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" event={"ID":"0058ec17-3783-415d-8d1a-dee576ffa3a3","Type":"ContainerStarted","Data":"5bb550e88d479c5132b7860675cfaa39b686fdf18284e934d4bb4207e7c328cb"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.108086 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" event={"ID":"bfb3cd0c-631e-4904-ad6c-bd2393d94c46","Type":"ContainerStarted","Data":"584ae855228dbf3b061be772bc7bbb334156d3463f980119d71f93952e60a455"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.108947 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" event={"ID":"963d8537-d384-4feb-a776-da74096c0884","Type":"ContainerStarted","Data":"447514af9ba686c35b0836b96b3ddfa1b266195ab7a5c6198d40d9604c9320d2"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.109898 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" event={"ID":"7c5f1f39-34be-4d98-af43-81b98c87f775","Type":"ContainerStarted","Data":"db092a33af89652397fc70ded3f38fe2e04b6b5c9d6455a4d3a0b905a63b5bc9"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.112289 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" event={"ID":"d2d13e04-dd95-4e42-97da-6a9ff04fd687","Type":"ContainerStarted","Data":"e6211818c4e8000fcb3082d6e45cf99a5065252e4d070e50ecab95a8b27ec6ea"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.113814 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" event={"ID":"d7de29f4-885a-469d-843e-3762c81f5379","Type":"ContainerStarted","Data":"53529622c851bdcd09440a99a47e4b14c17c209ab3354313bf35e98f4670d36c"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.115874 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqlrm" podStartSLOduration=126.115862312 podStartE2EDuration="2m6.115862312s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:16.114801267 +0000 UTC m=+148.192899384" watchObservedRunningTime="2025-11-23 00:09:16.115862312 +0000 UTC m=+148.193960439" Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.145313 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" event={"ID":"187ddc60-f070-4386-a8f8-b2ae8fd2ed08","Type":"ContainerStarted","Data":"8b7a44d4f902cd62e3265d63ffdf0455ddd68f186534a3e7be97ad1775a08a96"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.149444 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" event={"ID":"8ab03a1e-bf7f-4ad0-89da-d129b78994e0","Type":"ContainerStarted","Data":"035ff343365a554db2b0aae2b975c4f077cf65e7739f9279c0d60922da9c77d7"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.151230 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l6msj" 
event={"ID":"76c94f30-89a4-408d-8168-a49eb3869a39","Type":"ContainerStarted","Data":"7ad80ff86b0bf9a48806fa8540b8b35349e54ed21f2f21f8105aaf1a883da558"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.160626 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:16 crc kubenswrapper[4743]: E1123 00:09:16.161263 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:16.661250957 +0000 UTC m=+148.739349084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.167439 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g77gl" event={"ID":"bea571d4-5d83-4f27-93c0-38ca408f0841","Type":"ContainerStarted","Data":"9f7d8f804d89bf253a70376bdf94fa3cc2b1fba9778d644936774951037401fb"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.180770 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k4dzd" event={"ID":"fccac410-c6c3-454f-938c-64beeb04e317","Type":"ContainerStarted","Data":"7734f33a70c5083ad94fada24a5afffd7c182499087b5f36486f1bd96b6b0880"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.194672 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" event={"ID":"be3fa480-324b-4a69-a052-2a196e8daad3","Type":"ContainerStarted","Data":"b47e61bc2fa28bb55bdea440c904300c4945d69fef2fb4075a642f8cb2945c70"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.207942 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rdgvc" podStartSLOduration=126.207919683 podStartE2EDuration="2m6.207919683s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:16.205936275 +0000 UTC m=+148.284034412" watchObservedRunningTime="2025-11-23 00:09:16.207919683 +0000 UTC m=+148.286017810" Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.207982 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" event={"ID":"ab63bc02-007b-4a61-9355-7475eb5f4db2","Type":"ContainerStarted","Data":"934ed8596b5b2fc326b710458a337e88c047b632c79eef96b41d0579662da2f9"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.209277 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fkpf9" podStartSLOduration=126.209269465 podStartE2EDuration="2m6.209269465s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:16.173464562 +0000 UTC m=+148.251562689" watchObservedRunningTime="2025-11-23 00:09:16.209269465 +0000 UTC m=+148.287367592" Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.234628 4743 generic.go:334] "Generic (PLEG): container finished" podID="42f92a20-3051-4cc0-861c-5b6a58753aaf" containerID="7a2f5d52e192055aeece80594dcd30f53793f8ab58b1d2bc46fa65c48906a385" exitCode=0 Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.234734 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" event={"ID":"42f92a20-3051-4cc0-861c-5b6a58753aaf","Type":"ContainerDied","Data":"7a2f5d52e192055aeece80594dcd30f53793f8ab58b1d2bc46fa65c48906a385"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.240898 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sqgm6" event={"ID":"e59f68b6-cb09-4c13-acc5-eb4b713711da","Type":"ContainerStarted","Data":"7d56b2b38028cd8535b12c02eaab7481b33845915eba0a6b585e66d9f3985bfb"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.251480 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" event={"ID":"3b9b1f5e-0438-465c-8c1d-a68f26ed23db","Type":"ContainerStarted","Data":"5497c757399ad5c3ad1c0326b7a6f0b93c6caa69077363065c90140d357510a2"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.258963 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" event={"ID":"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022","Type":"ContainerStarted","Data":"57f02549c4a1d7e5eec49bb77a96eff75a86eb919ee6232cff051293625c722a"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.259402 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-k4dzd" podStartSLOduration=126.259382174 podStartE2EDuration="2m6.259382174s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:16.232318591 +0000 UTC m=+148.310416728" watchObservedRunningTime="2025-11-23 00:09:16.259382174 +0000 UTC m=+148.337480301" Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.262208 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.263537 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" event={"ID":"3e4abcae-ad51-404d-b5e9-e0d87c08b639","Type":"ContainerStarted","Data":"d8d094215cdbea9f5886bc8206f1336c448cf3bc600dbd09b3637206b4898a89"} Nov 23 00:09:16 crc kubenswrapper[4743]: E1123 00:09:16.263597 4743 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:16.763569385 +0000 UTC m=+148.841667512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.291174 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" event={"ID":"5c143775-a872-437d-874a-1f30df4361b4","Type":"ContainerStarted","Data":"5325598b66571f1681332a6a6e31b81b4786c326a750fc600bfef949639183a5"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.302741 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776" event={"ID":"100014ec-26b2-4311-82fd-41fa1228c011","Type":"ContainerStarted","Data":"0ca8e3ae42f0ee3caae13b4899654fb8c6c3691b12fb83426a56e0a22c8599d0"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.304753 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pc54n" event={"ID":"0b823bb3-b726-4881-9529-b40a3847704b","Type":"ContainerStarted","Data":"d516e04d1eb3a6c592854ec19a5197387be84c3dee3e3be27000b4922aa768c0"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.320648 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj" event={"ID":"ac9135b1-ff1e-460b-8c71-84b1f15317fa","Type":"ContainerStarted","Data":"23193a8781c8126b260675bc3331841ffc67d9ea600b8b02233bd30264a3370e"} Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.321189 4743 patch_prober.go:28] interesting pod/console-operator-58897d9998-jjr84 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.321229 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jjr84" podUID="10760937-904d-4004-837d-66e5e3dfe95f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.321558 4743 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7pqx6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.321594 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" podUID="d39201fe-fa08-49ca-adec-15441d9cbaa5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial 
tcp 10.217.0.22:8443: connect: connection refused" Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.321693 4743 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-zgddj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.321743 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" podUID="c9260cd3-3e10-47fe-b6f9-806bc90621fd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.326941 4743 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-52h52 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.326998 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" podUID="6e63d320-241c-4f1e-ace2-6b28a8d9d338" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.346021 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-sqgm6" podStartSLOduration=126.346000823 podStartE2EDuration="2m6.346000823s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:16.284846638 +0000 UTC m=+148.362944775" watchObservedRunningTime="2025-11-23 00:09:16.346000823 +0000 UTC m=+148.424098950" Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.372608 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:16 crc kubenswrapper[4743]: E1123 00:09:16.375969 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:16.875953196 +0000 UTC m=+148.954051333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.474164 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:16 crc kubenswrapper[4743]: E1123 00:09:16.475647 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:16.975605388 +0000 UTC m=+149.053703515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.577019 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:16 crc kubenswrapper[4743]: E1123 00:09:16.577612 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:17.077583258 +0000 UTC m=+149.155681445 (durationBeforeRetry 500ms). 
Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.679157 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 00:09:16 crc kubenswrapper[4743]: E1123 00:09:16.679679 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:17.17965171 +0000 UTC m=+149.257749837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.685584 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-sqgm6"
Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.687731 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.687789 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Nov 23 00:09:16 crc kubenswrapper[4743]: I1123 00:09:16.781318 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5"
Nov 23 00:09:16 crc kubenswrapper[4743]: E1123 00:09:16.781905 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:17.281878636 +0000 UTC m=+149.359976823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[The identical UnmountVolume.TearDown / MountVolume.MountDevice retry pair repeats roughly every 100ms from here (00:09:16.882, 00:09:16.984, 00:09:17.085, 00:09:17.187), unchanged except for timestamps, each failure again deferring the next attempt by 500ms; the repeated entries are omitted. The final attempt of the burst follows.]
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.289638 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5"
Nov 23 00:09:17 crc kubenswrapper[4743]: E1123 00:09:17.290155 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:17.790123555 +0000 UTC m=+149.868221682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
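The mount loop above is a registration race rather than a missing volume: kubelet's CSI code can only reach drivers that have announced themselves over the plugin-registration mechanism, and at this point in the boot the hostpath provisioner's node plugin has not yet done so, hence "not found in the list of registered CSI drivers". A small client-go sketch that lists the drivers a node has actually registered, via the CSINode object kubelet keeps in sync (an illustration under assumptions: a reachable kubeconfig at the default path, and the node name crc taken from the hostname in these entries):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The CSINode object mirrors kubelet's registered-driver list; until the
	// hostpath provisioner's node plugin registers, it will be absent here.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println(d.Name) // expect kubevirt.io.hostpath-provisioner once registered
	}
}

Once the driver shows up in that list, the parked mount and unmount operations are simply retried on the next reconciler pass and should clear on their own.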
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.327233 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" event={"ID":"be3fa480-324b-4a69-a052-2a196e8daad3","Type":"ContainerStarted","Data":"02fa4386bb5b078b0e06e6dbff6f013246b61ca5ff97e7f7709ef703de459141"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.330139 4743 generic.go:334] "Generic (PLEG): container finished" podID="b55e0a41-c894-4560-ae81-513ecb867548" containerID="2d6e20c516ee8863960391fb95b6e4bffd048c78bb6824e7d5af9ab49ba686cb" exitCode=0
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.330226 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" event={"ID":"b55e0a41-c894-4560-ae81-513ecb867548","Type":"ContainerDied","Data":"2d6e20c516ee8863960391fb95b6e4bffd048c78bb6824e7d5af9ab49ba686cb"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.331933 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" event={"ID":"7c5f1f39-34be-4d98-af43-81b98c87f775","Type":"ContainerStarted","Data":"afef6cdc3b95da84b78d2e6f5355e5233b32e133f0e99037f9442e79b6b8a3e2"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.332674 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.333954 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" event={"ID":"ec467b25-b224-4334-a46b-9f599c60138f","Type":"ContainerStarted","Data":"5a762ce9396ca0d6c83598b0b2f0e45cef412de17595a33663d4d2551d920c8e"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.334003 4743 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4hg6z container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.334074 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" podUID="7c5f1f39-34be-4d98-af43-81b98c87f775" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.335020 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94" event={"ID":"ff4a033a-c60f-4a28-8980-9bcbbdd88ba7","Type":"ContainerStarted","Data":"f11635afcdd62703751ad84aea605f47fa5237f28bd36422e1eeef1027548774"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.336452 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" event={"ID":"42f92a20-3051-4cc0-861c-5b6a58753aaf","Type":"ContainerStarted","Data":"79ab92cb7102f726f7065c73c2372bf6ee2e51ca02914655833819a280af1b35"}
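These SyncLoop (PLEG) entries are kubelet's pod lifecycle event generator relaying container-runtime state changes into the sync loop; each event={...} payload carries the pod UID, an event type such as ContainerStarted or ContainerDied, and, for container events, the container ID as data. A sketch of that event shape, patterned on the upstream pleg package but simplified and illustrative rather than the exact upstream definition:

package main

import "fmt"

// PodLifeCycleEventType names what happened to the pod or one of its
// containers; the values below match the Type strings in the log payloads.
type PodLifeCycleEventType string

const (
	ContainerStarted PodLifeCycleEventType = "ContainerStarted"
	ContainerDied    PodLifeCycleEventType = "ContainerDied"
)

// PodLifecycleEvent mirrors the event={"ID":...,"Type":...,"Data":...}
// payloads above (simplified: upstream uses types.UID for ID).
type PodLifecycleEvent struct {
	ID   string                // pod UID
	Type PodLifeCycleEventType // what happened
	Data interface{}           // for container events, the container ID
}

func main() {
	e := PodLifecycleEvent{
		ID:   "be3fa480-324b-4a69-a052-2a196e8daad3",
		Type: ContainerStarted,
		Data: "02fa4386bb5b078b0e06e6dbff6f013246b61ca5ff97e7f7709ef703de459141",
	}
	// kubelet's sync loop consumes each such event and triggers a pod sync.
	fmt.Printf("SyncLoop (PLEG): event for pod %s: %s\n", e.ID, e.Type)
}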
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.337604 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v7tpd" event={"ID":"693544fb-054a-4b31-92a4-c1f89a7ee729","Type":"ContainerStarted","Data":"aadaee141a0f2dc56dacd61ffa1ee976d23587900ce4791ec3475f7ad91d72ef"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.339508 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" event={"ID":"d70cc2b9-d502-42fb-9f99-9473b57b2293","Type":"ContainerStarted","Data":"a951dd0aeb10006ea3e1e2512c94ffc6c0300de00e0d26d4e198366009128687"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.339808 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.340608 4743 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b8wg9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.340662 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" podUID="d70cc2b9-d502-42fb-9f99-9473b57b2293" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.340784 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" event={"ID":"ab63bc02-007b-4a61-9355-7475eb5f4db2","Type":"ContainerStarted","Data":"d8b59c0de0021172451a35c3ab4c8525b41c5449a76578ae3e483189c1d14459"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.342367 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" event={"ID":"963d8537-d384-4feb-a776-da74096c0884","Type":"ContainerStarted","Data":"ab36923f1d05128d7e08ce5c9cbf8224e46334eaeda7dea0c607cfea67db8a09"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.342745 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.343474 4743 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rr7k6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.343526 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" podUID="963d8537-d384-4feb-a776-da74096c0884" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.343865 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pc54n" event={"ID":"0b823bb3-b726-4881-9529-b40a3847704b","Type":"ContainerStarted","Data":"91de49637e35d28ffe0c367443c869df2af6277092840abbcefdfa1f5f25a3c7"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.345044 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj" event={"ID":"ac9135b1-ff1e-460b-8c71-84b1f15317fa","Type":"ContainerStarted","Data":"d277c8b8549e75e3e58a14f98c0e82776c370cf6d92cef7746931ed137b2f50c"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.347891 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" event={"ID":"261cf90a-79d7-4b54-9019-7a25dc991ec7","Type":"ContainerStarted","Data":"543116a419b8a024c668797ddff1e2877a131fa8689c58fe83b6fef6cbc8669b"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.348844 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9jcb9" podStartSLOduration=127.348831781 podStartE2EDuration="2m7.348831781s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.348124324 +0000 UTC m=+149.426222461" watchObservedRunningTime="2025-11-23 00:09:17.348831781 +0000 UTC m=+149.426929908"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.349695 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-dfr5p" podStartSLOduration=127.349687661 podStartE2EDuration="2m7.349687661s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:16.345700606 +0000 UTC m=+148.423798743" watchObservedRunningTime="2025-11-23 00:09:17.349687661 +0000 UTC m=+149.427785788"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.349982 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" event={"ID":"6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022","Type":"ContainerStarted","Data":"b6a4f9b602c95c974abd38aee4e9e529914d1f111e92ce5dc30381c6873aa412"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.353359 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g77gl" event={"ID":"bea571d4-5d83-4f27-93c0-38ca408f0841","Type":"ContainerStarted","Data":"3ccc623b82ca99018f58aacece0783228afe07cf97e9e39e2cbd58f9f95508f4"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.356092 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" event={"ID":"5c143775-a872-437d-874a-1f30df4361b4","Type":"ContainerStarted","Data":"575d087188606ad271d73c004f7a3e54768a93132aca745d5498b23cfd9a01e1"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.358846 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776" event={"ID":"100014ec-26b2-4311-82fd-41fa1228c011","Type":"ContainerStarted","Data":"e1ae223d8fa277d4860ae00859cfeae52fae5c0cfbabcdd0b674a14ce6796a22"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.360644 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" event={"ID":"3b9b1f5e-0438-465c-8c1d-a68f26ed23db","Type":"ContainerStarted","Data":"db9159592c76888c92eed007a70bf1b9f157bb39144cf5ca9f108219107ee3d7"}
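The podStartSLOduration figures logged by pod_startup_latency_tracker.go are plain timestamp arithmetic: with the zero-valued firstStartedPulling/lastFinishedPulling sentinels above (no image pull observed), the SLO duration equals the E2E duration, watchObservedRunningTime minus podCreationTimestamp. A quick recomputation of the kube-storage-version-migrator-operator entry, under that reading:

package main

import (
	"fmt"
	"time"
)

// Recomputes podStartSLOduration for the kube-storage-version-migrator-operator
// entry above as watchObservedRunningTime minus podCreationTimestamp; with the
// sentinel zero pull timestamps, SLO and E2E durations coincide. Illustrative.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-11-23 00:07:10 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-11-23 00:09:17.348831781 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 2m7.348831781s, i.e. podStartSLOduration=127.348831781.
	fmt.Println(running.Sub(created))
}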
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.363632 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" event={"ID":"af4098ab-953c-4329-bec5-ec5a44f01f8e","Type":"ContainerStarted","Data":"5f204035807d3364da8f373747361acb7edbff280f8ca182cd8519e7bb302646"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.365623 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" event={"ID":"90e048af-50eb-4557-83e1-e19979685ded","Type":"ContainerStarted","Data":"55f2133fac9b70c0d048f9242294d6883c2df20f234df059cec1698115e27cc8"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.365798 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.366915 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" event={"ID":"d2d13e04-dd95-4e42-97da-6a9ff04fd687","Type":"ContainerStarted","Data":"37e2dc0894d1341f9d46c455d95916ddb03901c486fb347493027897ca307123"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.368853 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" event={"ID":"203f4e5b-490a-43cb-90db-8beed3234d54","Type":"ContainerStarted","Data":"be9a4bf17ec2e64c93c453abaf88ca6e6ef9ce42e0396abfb35105e2a4d8e231"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.372146 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" event={"ID":"3e4abcae-ad51-404d-b5e9-e0d87c08b639","Type":"ContainerStarted","Data":"fda04e18f11ce19565a2536a5c4480cc3d638fcf45ffc41a6fca19d782feabb3"}
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.372197 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dfr5p"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.373729 4743 patch_prober.go:28] interesting pod/console-operator-58897d9998-jjr84 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.373782 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jjr84" podUID="10760937-904d-4004-837d-66e5e3dfe95f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.374047 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.374886 4743 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5mslk container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.374970 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" podUID="0058ec17-3783-415d-8d1a-dee576ffa3a3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.377171 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pc54n" podStartSLOduration=8.377160054 podStartE2EDuration="8.377160054s" podCreationTimestamp="2025-11-23 00:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.375960765 +0000 UTC m=+149.454058912" watchObservedRunningTime="2025-11-23 00:09:17.377160054 +0000 UTC m=+149.455258181"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.380842 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.380979 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.390806 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 00:09:17 crc kubenswrapper[4743]: E1123 00:09:17.391422 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:17.891384917 +0000 UTC m=+149.969483044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.428754 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hw6zj" podStartSLOduration=127.428726658 podStartE2EDuration="2m7.428726658s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.425064539 +0000 UTC m=+149.503162666" watchObservedRunningTime="2025-11-23 00:09:17.428726658 +0000 UTC m=+149.506824785"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.445690 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" podStartSLOduration=127.445665436 podStartE2EDuration="2m7.445665436s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.443504544 +0000 UTC m=+149.521602681" watchObservedRunningTime="2025-11-23 00:09:17.445665436 +0000 UTC m=+149.523763563"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.461954 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" podStartSLOduration=127.461928419 podStartE2EDuration="2m7.461928419s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.46076003 +0000 UTC m=+149.538858167" watchObservedRunningTime="2025-11-23 00:09:17.461928419 +0000 UTC m=+149.540026546"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.475518 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" podStartSLOduration=127.475492346 podStartE2EDuration="2m7.475492346s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.473096788 +0000 UTC m=+149.551194925" watchObservedRunningTime="2025-11-23 00:09:17.475492346 +0000 UTC m=+149.553590463"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.489120 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" podStartSLOduration=127.48909561400001 podStartE2EDuration="2m7.489095614s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.487595338 +0000 UTC m=+149.565693475" watchObservedRunningTime="2025-11-23 00:09:17.489095614 +0000 UTC m=+149.567193751"
Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.492956 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5"
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:17 crc kubenswrapper[4743]: E1123 00:09:17.493999 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:17.993975312 +0000 UTC m=+150.072073439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.505858 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-v7tpd" podStartSLOduration=8.505831267 podStartE2EDuration="8.505831267s" podCreationTimestamp="2025-11-23 00:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.505403687 +0000 UTC m=+149.583501834" watchObservedRunningTime="2025-11-23 00:09:17.505831267 +0000 UTC m=+149.583929394" Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.553631 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-njxkk" podStartSLOduration=127.55360429 podStartE2EDuration="2m7.55360429s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.529804166 +0000 UTC m=+149.607902303" watchObservedRunningTime="2025-11-23 00:09:17.55360429 +0000 UTC m=+149.631702417" Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.578679 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6nm8l" podStartSLOduration=127.578652244 podStartE2EDuration="2m7.578652244s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.555721701 +0000 UTC m=+149.633819828" watchObservedRunningTime="2025-11-23 00:09:17.578652244 +0000 UTC m=+149.656750381" Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.594474 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:17 crc kubenswrapper[4743]: E1123 00:09:17.595019 4743 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:18.094995058 +0000 UTC m=+150.173093185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.600968 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26wwd" podStartSLOduration=127.600946692 podStartE2EDuration="2m7.600946692s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.582946728 +0000 UTC m=+149.661044875" watchObservedRunningTime="2025-11-23 00:09:17.600946692 +0000 UTC m=+149.679044819" Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.602731 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" podStartSLOduration=127.602725565 podStartE2EDuration="2m7.602725565s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.600660125 +0000 UTC m=+149.678758262" watchObservedRunningTime="2025-11-23 00:09:17.602725565 +0000 UTC m=+149.680823692" Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.616133 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-q85t8" podStartSLOduration=126.616095157 podStartE2EDuration="2m6.616095157s" podCreationTimestamp="2025-11-23 00:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.614683993 +0000 UTC m=+149.692782140" watchObservedRunningTime="2025-11-23 00:09:17.616095157 +0000 UTC m=+149.694193274" Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.653656 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" podStartSLOduration=127.653611132 podStartE2EDuration="2m7.653611132s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.635172087 +0000 UTC m=+149.713270224" watchObservedRunningTime="2025-11-23 00:09:17.653611132 +0000 UTC m=+149.731709259" Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.692505 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.692578 4743 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.697167 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:17 crc kubenswrapper[4743]: E1123 00:09:17.697639 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:18.197624044 +0000 UTC m=+150.275722171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.798458 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:17 crc kubenswrapper[4743]: E1123 00:09:17.798697 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:18.29865436 +0000 UTC m=+150.376752487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:17 crc kubenswrapper[4743]: I1123 00:09:17.798777 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:17 crc kubenswrapper[4743]: E1123 00:09:17.800946 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
[The same UnmountVolume.TearDown / MountVolume.MountDevice retry pair repeats roughly every 100ms (00:09:17.900, 00:09:18.002, 00:09:18.109, 00:09:18.213, 00:09:18.315), unchanged except for timestamps; the repeated entries are omitted. The final attempt of the burst follows.]
Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.316544 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5"
Nov 23 00:09:18 crc kubenswrapper[4743]: E1123 00:09:18.316950 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:18.816932611 +0000 UTC m=+150.895030738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.379055 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g77gl" event={"ID":"bea571d4-5d83-4f27-93c0-38ca408f0841","Type":"ContainerStarted","Data":"fdb214648fbbe016319120028f9d3e267959f74b21dedada57c311dd0cb51fdc"} Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.381767 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" event={"ID":"b55e0a41-c894-4560-ae81-513ecb867548","Type":"ContainerStarted","Data":"b8c7345cb2b8bd5db257edb10762c4a256a41fb34272b5df2fe4fb3113946cf4"} Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.381988 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.384379 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" event={"ID":"ec467b25-b224-4334-a46b-9f599c60138f","Type":"ContainerStarted","Data":"c2c4df39efceb127736e4dd3731d57f77fe2909617ae2aaa1918cd5559a067d6"} Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.386205 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94" event={"ID":"ff4a033a-c60f-4a28-8980-9bcbbdd88ba7","Type":"ContainerStarted","Data":"d7775f787e7c47865c020d16ef9557b82861e894c72d319e9983d739551407b1"} Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.388467 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" event={"ID":"203f4e5b-490a-43cb-90db-8beed3234d54","Type":"ContainerStarted","Data":"f03fb354f35e0cc09c1d61f9e227c1ef0a8709aa2d263331a55feb9e026a45e8"} Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.390335 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" event={"ID":"261cf90a-79d7-4b54-9019-7a25dc991ec7","Type":"ContainerStarted","Data":"7f5108bb2b557beeef3be8d6a4358ee68a4806f68d8a3d8b7c58e8d00676b775"} Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.391860 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" event={"ID":"af4098ab-953c-4329-bec5-ec5a44f01f8e","Type":"ContainerStarted","Data":"efa6b10e9f516a93265ad4afd1a8066b51f03b04646b7960dccf8dac015d8642"} Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.394053 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" event={"ID":"3e4abcae-ad51-404d-b5e9-e0d87c08b639","Type":"ContainerStarted","Data":"32c692a9006f9fed3615c19ec133af2bd131b211607f7e6ed3f262a9f5ef156d"} Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.394764 4743 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5mslk container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.394801 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" podUID="0058ec17-3783-415d-8d1a-dee576ffa3a3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.394764 4743 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4hg6z container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.394881 4743 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b8wg9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.394889 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" podUID="7c5f1f39-34be-4d98-af43-81b98c87f775" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.394906 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" podUID="d70cc2b9-d502-42fb-9f99-9473b57b2293" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.395006 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.395051 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.395168 4743 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rr7k6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.395196 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" podUID="963d8537-d384-4feb-a776-da74096c0884" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: 
connection refused" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.404067 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" podStartSLOduration=128.404042043 podStartE2EDuration="2m8.404042043s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:18.402200898 +0000 UTC m=+150.480299045" watchObservedRunningTime="2025-11-23 00:09:18.404042043 +0000 UTC m=+150.482140190" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.405929 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-x99kl" podStartSLOduration=127.405921078 podStartE2EDuration="2m7.405921078s" podCreationTimestamp="2025-11-23 00:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:17.653338665 +0000 UTC m=+149.731436792" watchObservedRunningTime="2025-11-23 00:09:18.405921078 +0000 UTC m=+150.484019205" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.418317 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:18 crc kubenswrapper[4743]: E1123 00:09:18.418689 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:18.918667915 +0000 UTC m=+150.996766042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.457324 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hmpwd" podStartSLOduration=128.451769094 podStartE2EDuration="2m8.451769094s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:18.423987964 +0000 UTC m=+150.502086101" watchObservedRunningTime="2025-11-23 00:09:18.451769094 +0000 UTC m=+150.529867221" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.476670 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4vhsd" podStartSLOduration=128.476642514 podStartE2EDuration="2m8.476642514s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:18.45159896 +0000 UTC m=+150.529697107" watchObservedRunningTime="2025-11-23 00:09:18.476642514 +0000 UTC m=+150.554740641" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.478198 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" podStartSLOduration=128.478190381 podStartE2EDuration="2m8.478190381s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:18.475164108 +0000 UTC m=+150.553262255" watchObservedRunningTime="2025-11-23 00:09:18.478190381 +0000 UTC m=+150.556288518" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.494616 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-64f58" podStartSLOduration=128.494590657 podStartE2EDuration="2m8.494590657s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:18.49430735 +0000 UTC m=+150.572405497" watchObservedRunningTime="2025-11-23 00:09:18.494590657 +0000 UTC m=+150.572688784" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.523128 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:18 crc kubenswrapper[4743]: E1123 00:09:18.539286 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-23 00:09:19.039263134 +0000 UTC m=+151.117361261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.564010 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdh94" podStartSLOduration=128.56398114 podStartE2EDuration="2m8.56398114s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:18.517771596 +0000 UTC m=+150.595869743" watchObservedRunningTime="2025-11-23 00:09:18.56398114 +0000 UTC m=+150.642079267" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.577174 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mdb2v" podStartSLOduration=128.577151508 podStartE2EDuration="2m8.577151508s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:18.575220431 +0000 UTC m=+150.653318568" watchObservedRunningTime="2025-11-23 00:09:18.577151508 +0000 UTC m=+150.655249625" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.578013 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2pjp2" podStartSLOduration=128.578008439 podStartE2EDuration="2m8.578008439s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:18.552849302 +0000 UTC m=+150.630947439" watchObservedRunningTime="2025-11-23 00:09:18.578008439 +0000 UTC m=+150.656106566" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.627944 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vl776" podStartSLOduration=128.627915722 podStartE2EDuration="2m8.627915722s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:18.600194834 +0000 UTC m=+150.678292981" watchObservedRunningTime="2025-11-23 00:09:18.627915722 +0000 UTC m=+150.706013849" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.630445 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:18 crc kubenswrapper[4743]: E1123 00:09:18.630728 4743 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:19.130675259 +0000 UTC m=+151.208773386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.631035 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:18 crc kubenswrapper[4743]: E1123 00:09:18.631527 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:19.131515459 +0000 UTC m=+151.209613586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.686460 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.686545 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.732252 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:18 crc kubenswrapper[4743]: E1123 00:09:18.733312 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:19.233287824 +0000 UTC m=+151.311385951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.746914 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" podStartSLOduration=127.746872912 podStartE2EDuration="2m7.746872912s" podCreationTimestamp="2025-11-23 00:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:18.629294086 +0000 UTC m=+150.707392213" watchObservedRunningTime="2025-11-23 00:09:18.746872912 +0000 UTC m=+150.824971039" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.834795 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.834849 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.834907 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.834936 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.834964 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:18 crc kubenswrapper[4743]: E1123 00:09:18.835301 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-23 00:09:19.335286264 +0000 UTC m=+151.413384391 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.835907 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.840906 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.841118 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.846088 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.848289 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:09:18 crc kubenswrapper[4743]: I1123 00:09:18.935348 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:18 crc kubenswrapper[4743]: E1123 00:09:18.935682 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:19.435660495 +0000 UTC m=+151.513758622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.038502 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.039162 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:19.539141971 +0000 UTC m=+151.617240098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.066538 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 00:09:19 crc kubenswrapper[4743]: W1123 00:09:19.113774 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-c8002a7dc4828103abac4dac4181f4a189ee27c106b560232b632f273fa7bd72 WatchSource:0}: Error finding container c8002a7dc4828103abac4dac4181f4a189ee27c106b560232b632f273fa7bd72: Status 404 returned error can't find the container with id c8002a7dc4828103abac4dac4181f4a189ee27c106b560232b632f273fa7bd72 Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.141941 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.142334 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.142398 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:19.642358701 +0000 UTC m=+151.720456868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.245360 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.245849 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:19.745829057 +0000 UTC m=+151.823927184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.346954 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.347721 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:19.847700464 +0000 UTC m=+151.925798591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.428771 4743 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rr7k6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.428830 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" podUID="963d8537-d384-4feb-a776-da74096c0884" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.429322 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c8002a7dc4828103abac4dac4181f4a189ee27c106b560232b632f273fa7bd72"} Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.430299 4743 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4hg6z container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.430353 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" podUID="7c5f1f39-34be-4d98-af43-81b98c87f775" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.452540 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.456616 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:19.95658978 +0000 UTC m=+152.034687907 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.476696 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-g77gl" podStartSLOduration=10.476672964 podStartE2EDuration="10.476672964s" podCreationTimestamp="2025-11-23 00:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:19.476245424 +0000 UTC m=+151.554343571" watchObservedRunningTime="2025-11-23 00:09:19.476672964 +0000 UTC m=+151.554771081" Nov 23 00:09:19 crc kubenswrapper[4743]: W1123 00:09:19.538216 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-772402ab06286bf44abe6d17285b88cdfdc90a026cb0c54ce4a6fe2cc4063db4 WatchSource:0}: Error finding container 772402ab06286bf44abe6d17285b88cdfdc90a026cb0c54ce4a6fe2cc4063db4: Status 404 returned error can't find the container with id 772402ab06286bf44abe6d17285b88cdfdc90a026cb0c54ce4a6fe2cc4063db4 Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.554244 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.554667 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.054647405 +0000 UTC m=+152.132745532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: W1123 00:09:19.599641 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-34f7422fa9604d3e8d553705b483c440ef664959b1777451f1321c3c86585c6c WatchSource:0}: Error finding container 34f7422fa9604d3e8d553705b483c440ef664959b1777451f1321c3c86585c6c: Status 404 returned error can't find the container with id 34f7422fa9604d3e8d553705b483c440ef664959b1777451f1321c3c86585c6c Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.655882 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.656533 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.156501962 +0000 UTC m=+152.234600089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.757204 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.757397 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.257363905 +0000 UTC m=+152.335462032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.757456 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.757872 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.257854257 +0000 UTC m=+152.335952384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.814738 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:19 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:19 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:19 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.814809 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.858765 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.859005 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.358964845 +0000 UTC m=+152.437062972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.859612 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.859971 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.359963659 +0000 UTC m=+152.438061776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.961317 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.961537 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.461497199 +0000 UTC m=+152.539595346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:19 crc kubenswrapper[4743]: I1123 00:09:19.963341 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:19 crc kubenswrapper[4743]: E1123 00:09:19.963752 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.463736752 +0000 UTC m=+152.541834879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.064578 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.064859 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.56481746 +0000 UTC m=+152.642915587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.065238 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.065662 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.56565295 +0000 UTC m=+152.643751077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.166520 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.166752 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.666706867 +0000 UTC m=+152.744805004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.166905 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.167303 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.667284701 +0000 UTC m=+152.745382828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.268207 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.268390 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.768354539 +0000 UTC m=+152.846452666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.268423 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.268815 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.7688068 +0000 UTC m=+152.846904927 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.369732 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.370022 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.86997786 +0000 UTC m=+152.948075987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.370478 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.370819 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.87080352 +0000 UTC m=+152.948901647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.433429 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"34f7422fa9604d3e8d553705b483c440ef664959b1777451f1321c3c86585c6c"} Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.434591 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"772402ab06286bf44abe6d17285b88cdfdc90a026cb0c54ce4a6fe2cc4063db4"} Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.471628 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.471984 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.97194411 +0000 UTC m=+153.050042257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.472204 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.472723 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:20.972711158 +0000 UTC m=+153.050809295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.573352 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.573877 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.073838977 +0000 UTC m=+153.151937104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.635507 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.636553 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.639266 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.639559 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.675428 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05171497-286a-4f55-b08b-e8854eaf10a7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"05171497-286a-4f55-b08b-e8854eaf10a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.675506 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.675576 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05171497-286a-4f55-b08b-e8854eaf10a7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"05171497-286a-4f55-b08b-e8854eaf10a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.676090 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.176064733 +0000 UTC m=+153.254162920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.677010 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.687822 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:20 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:20 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:20 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.687920 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.776362 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.776585 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.276550077 +0000 UTC m=+153.354648214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.776786 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05171497-286a-4f55-b08b-e8854eaf10a7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"05171497-286a-4f55-b08b-e8854eaf10a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.776861 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05171497-286a-4f55-b08b-e8854eaf10a7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"05171497-286a-4f55-b08b-e8854eaf10a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.776915 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.777023 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05171497-286a-4f55-b08b-e8854eaf10a7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"05171497-286a-4f55-b08b-e8854eaf10a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.777422 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.277402427 +0000 UTC m=+153.355500564 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.800237 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05171497-286a-4f55-b08b-e8854eaf10a7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"05171497-286a-4f55-b08b-e8854eaf10a7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.877789 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.878057 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.378009434 +0000 UTC m=+153.456107581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.952308 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 00:09:20 crc kubenswrapper[4743]: I1123 00:09:20.979336 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:20 crc kubenswrapper[4743]: E1123 00:09:20.979773 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.479756198 +0000 UTC m=+153.557854325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.082782 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.083495 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.583451559 +0000 UTC m=+153.661549686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.184426 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.184906 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.684884636 +0000 UTC m=+153.762982763 (durationBeforeRetry 500ms). 
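
Each failed operation is parked behind a per-volume deadline, which is why the identical UnmountVolume/MountVolume pair reappears roughly twice per second: the operation executor refuses to restart work on that volume until lastFailure plus durationBeforeRetry has elapsed, exactly the "No retries permitted until ..." lines above. The sketch below illustrates that gating with the constant 500 ms window this log shows; the real kubelet tracks this in nestedpendingoperations with backoff machinery, so treat the fixed delay as an assumption made for illustration.

    // Minimal retry-gate sketch (assumption: fixed 500 ms window, not
    // the kubelet's nestedpendingoperations implementation).
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    type retryGate struct {
        notBefore map[string]time.Time
        delay     time.Duration
    }

    func (g *retryGate) try(key string, op func() error) error {
        if until, ok := g.notBefore[key]; ok && time.Now().Before(until) {
            return fmt.Errorf("no retries permitted until %s", until.Format(time.RFC3339Nano))
        }
        if err := op(); err != nil {
            g.notBefore[key] = time.Now().Add(g.delay) // park the key
            return err
        }
        delete(g.notBefore, key) // success clears the gate
        return nil
    }

    func main() {
        g := &retryGate{notBefore: map[string]time.Time{}, delay: 500 * time.Millisecond}
        mount := func() error { return errors.New("driver not registered") }
        for i := 0; i < 3; i++ {
            fmt.Println(g.try("pvc-657094db", mount))
            time.Sleep(300 * time.Millisecond) // some attempts land inside the window
        }
    }
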
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.222674 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.286372 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.287076 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.78704794 +0000 UTC m=+153.865146067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.388197 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.389597 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.889554842 +0000 UTC m=+153.967652969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.445314 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"05171497-286a-4f55-b08b-e8854eaf10a7","Type":"ContainerStarted","Data":"9470d1b8d4705f19a2a2412aad1e28280520a28ffa265754d104659794e6ef02"} Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.447130 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c9a99a70db86b78d97e4d7a3c9390e0304bc5f465658b6add587c048ade58838"} Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.447345 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.493590 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.493818 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.993767426 +0000 UTC m=+154.071865553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.494159 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.494721 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:21.994709269 +0000 UTC m=+154.072807396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.595523 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.595697 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:22.095670654 +0000 UTC m=+154.173768791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.595992 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.596385 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:22.096375241 +0000 UTC m=+154.174473368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.671007 4743 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-xgh54 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.671066 4743 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-xgh54 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.671081 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" podUID="b55e0a41-c894-4560-ae81-513ecb867548" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.671198 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" podUID="b55e0a41-c894-4560-ae81-513ecb867548" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.689653 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:21 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:21 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:21 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.689751 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.697428 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.697730 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
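
The router startup-probe bodies captured above follow the usual Kubernetes healthz convention: one [+] or [-] line per sub-check, a trailing "healthz check failed", and an overall HTTP 500 whenever any check fails, which the kubelet then records as a probe failure. A small handler in that style is sketched below; it is an illustration of the output format, not the openshift-router source.

    // Illustrative healthz-style aggregator (assumption: simplified,
    // matching only the probe output format seen in the log).
    package main

    import (
        "fmt"
        "net/http"
    )

    type check struct {
        name string
        fn   func() error
    }

    func healthz(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            failed := false
            body := ""
            for _, c := range checks {
                if err := c.fn(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError) // statuscode: 500
                fmt.Fprint(w, body+"healthz check failed\n")
                return
            }
            fmt.Fprint(w, body+"ok\n")
        }
    }

    func main() {
        http.HandleFunc("/healthz", healthz([]check{
            {"backend-http", func() error { return fmt.Errorf("not synced") }},
            {"process-running", func() error { return nil }},
        }))
        http.ListenAndServe(":8080", nil)
    }
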
No retries permitted until 2025-11-23 00:09:22.197676574 +0000 UTC m=+154.275774701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.697805 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.698187 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:22.198170806 +0000 UTC m=+154.276268933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.789415 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.799389 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.799793 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:22.299753406 +0000 UTC m=+154.377851533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.834429 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.834497 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.835816 4743 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z7mnv container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.835875 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" podUID="203f4e5b-490a-43cb-90db-8beed3234d54" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.858807 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.898416 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:09:21 crc kubenswrapper[4743]: I1123 00:09:21.900290 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:21 crc kubenswrapper[4743]: E1123 00:09:21.902387 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:22.402351131 +0000 UTC m=+154.480449348 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.001731 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.004896 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:22.504849323 +0000 UTC m=+154.582947450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.013323 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jjr84" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.033459 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.033497 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.033610 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.033551 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.104629 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.105164 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:22.605129802 +0000 UTC m=+154.683227929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.141943 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.142000 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.146691 4743 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-gdmf8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.146762 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" podUID="42f92a20-3051-4cc0-861c-5b6a58753aaf" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.165081 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.206533 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.206820 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:22.706772434 +0000 UTC m=+154.784870561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.206900 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.208074 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:22.708051325 +0000 UTC m=+154.786149442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.308161 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.308362 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:22.808328563 +0000 UTC m=+154.886426690 (durationBeforeRetry 500ms). 
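
Note the vocabulary split in the surrounding entries: MountVolume.MountDevice is the once-per-node CSI staging step, while MountVolume.SetUp is the per-pod publish step, and the kubelet-dir and kube-api-access volumes for revision-pruner-9-crc mount immediately because host-path and projected volumes never go through the CSI client at all. The sketch below is a trimmed rendering of the two node calls, simplified from the CSI spec rather than taken from the real Go bindings; the paths are placeholders.

    // Trimmed sketch of the two CSI node calls behind the log's terms
    // (assumption: hand-simplified interface, not the csi spec package).
    package main

    import "fmt"

    type nodeServer interface {
        NodeStageVolume(volumeID, stagingPath string) error  // ~ MountVolume.MountDevice
        NodePublishVolume(volumeID, targetPath string) error // ~ MountVolume.SetUp
    }

    type hostpathDriver struct{}

    func (hostpathDriver) NodeStageVolume(id, staging string) error {
        fmt.Printf("staged %s at %s\n", id, staging)
        return nil
    }

    func (hostpathDriver) NodePublishVolume(id, target string) error {
        fmt.Printf("published %s at %s\n", id, target)
        return nil
    }

    func main() {
        var d nodeServer = hostpathDriver{}
        _ = d.NodeStageVolume("pvc-657094db", "/var/lib/kubelet/plugins/<driver>/staging")
        _ = d.NodePublishVolume("pvc-657094db", "/var/lib/kubelet/pods/<uid>/volumes/<mount>")
    }
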
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.308497 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.308938 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:22.808924118 +0000 UTC m=+154.887022245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.363499 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.363558 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.366227 4743 patch_prober.go:28] interesting pod/console-f9d7485db-k4dzd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.366306 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-k4dzd" podUID="fccac410-c6c3-454f-938c-64beeb04e317" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.410138 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.411846 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-23 00:09:22.91182565 +0000 UTC m=+154.989923777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.454366 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"845b7e8db7e6ec133d13c8cf434f31bd2d9e8891b2623ede6a2d2716599bd234"} Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.455748 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l6msj" event={"ID":"76c94f30-89a4-408d-8168-a49eb3869a39","Type":"ContainerStarted","Data":"02cefd54c2405bce3d77b2b79ad19bc62a79888253bb3e955bf21e53b099358a"} Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.457262 4743 generic.go:334] "Generic (PLEG): container finished" podID="ab63bc02-007b-4a61-9355-7475eb5f4db2" containerID="d8b59c0de0021172451a35c3ab4c8525b41c5449a76578ae3e483189c1d14459" exitCode=0 Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.457318 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" event={"ID":"ab63bc02-007b-4a61-9355-7475eb5f4db2","Type":"ContainerDied","Data":"d8b59c0de0021172451a35c3ab4c8525b41c5449a76578ae3e483189c1d14459"} Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.459005 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a54019f027e1277fa0b24e41f70c8fe21b1cdaaa8a1eb8e7d0cbeaea10868545"} Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.461164 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"05171497-286a-4f55-b08b-e8854eaf10a7","Type":"ContainerStarted","Data":"c4d805222fcb731827d113bbb7d1f19ea9c5f2b58db7244efb129a598afaa46d"} Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.512414 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.512753 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.012741154 +0000 UTC m=+155.090839271 (durationBeforeRetry 500ms). 
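
At this density of repetition it is worth confirming that the journal really contains one failing volume and one missing driver rather than several. The triage helper below is my addition, not part of the cluster tooling: piped the journal through stdin, it tallies failures per volume and per driver name.

    // Quick triage helper (an addition for analysis, not cluster tooling):
    // counts mount/unmount failures per volume and per CSI driver name.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        volRe := regexp.MustCompile(`failed for volume "([^"]+)"`)
        drvRe := regexp.MustCompile(`driver name (\S+) not found`)
        vols := map[string]int{}
        drvs := map[string]int{}

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
        for sc.Scan() {
            if m := volRe.FindStringSubmatch(sc.Text()); m != nil {
                vols[m[1]]++
            }
            if m := drvRe.FindStringSubmatch(sc.Text()); m != nil {
                drvs[m[1]]++
            }
        }
        fmt.Println("failures per volume:", vols)
        fmt.Println("failures per driver:", drvs)
    }

For example: journalctl -u kubelet --no-pager | go run tally.go. On this log it should report a single volume (pvc-657094db-...) and a single driver (kubevirt.io.hostpath-provisioner).
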
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.613848 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.614122 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.114077628 +0000 UTC m=+155.192175755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.614472 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.614854 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.114845657 +0000 UTC m=+155.192943784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.685138 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.690553 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:22 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:22 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:22 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.690652 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.716152 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.716401 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.216358915 +0000 UTC m=+155.294457042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.716624 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.717186 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.217142834 +0000 UTC m=+155.295240981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.756599 4743 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rr7k6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.756671 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" podUID="963d8537-d384-4feb-a776-da74096c0884" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.757438 4743 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rr7k6 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.757513 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" podUID="963d8537-d384-4feb-a776-da74096c0884" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.817959 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.818313 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.318294664 +0000 UTC m=+155.396392791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.847109 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mslk" Nov 23 00:09:22 crc kubenswrapper[4743]: I1123 00:09:22.923354 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:22 crc kubenswrapper[4743]: E1123 00:09:22.925257 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.425242614 +0000 UTC m=+155.503340741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.024119 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.024577 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.524556519 +0000 UTC m=+155.602654646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.126292 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.126789 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.626767764 +0000 UTC m=+155.704865891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.227849 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.228046 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.728012796 +0000 UTC m=+155.806110923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.228510 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.228905 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.728891328 +0000 UTC m=+155.806989455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.230137 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hg6z" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.330243 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.331160 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.831121493 +0000 UTC m=+155.909219620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.431694 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.432167 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:23.93214927 +0000 UTC m=+156.010247397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.468063 4743 generic.go:334] "Generic (PLEG): container finished" podID="05171497-286a-4f55-b08b-e8854eaf10a7" containerID="c4d805222fcb731827d113bbb7d1f19ea9c5f2b58db7244efb129a598afaa46d" exitCode=0 Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.468967 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"05171497-286a-4f55-b08b-e8854eaf10a7","Type":"ContainerDied","Data":"c4d805222fcb731827d113bbb7d1f19ea9c5f2b58db7244efb129a598afaa46d"} Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.533605 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.533825 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.033787372 +0000 UTC m=+156.111885499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.533990 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.534497 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.034459758 +0000 UTC m=+156.112557875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.635187 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.635455 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.135413232 +0000 UTC m=+156.213511349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.635538 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.635978 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.135958415 +0000 UTC m=+156.214056542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.693665 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.693733 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.708723 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:23 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:23 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:23 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.708786 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.737090 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.737611 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.237590037 +0000 UTC m=+156.315688154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.799626 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.800417 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.805393 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.805513 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.815041 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.828268 4743 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b8wg9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.828343 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" podUID="d70cc2b9-d502-42fb-9f99-9473b57b2293" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.828897 4743 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b8wg9 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.828931 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" podUID="d70cc2b9-d502-42fb-9f99-9473b57b2293" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.838545 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.838689 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/796435d5-3cf9-4caf-a033-f58fc197ba12-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"796435d5-3cf9-4caf-a033-f58fc197ba12\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.838758 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/796435d5-3cf9-4caf-a033-f58fc197ba12-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"796435d5-3cf9-4caf-a033-f58fc197ba12\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.838974 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.338950361 +0000 UTC m=+156.417048488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.850138 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.940088 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.940144 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab63bc02-007b-4a61-9355-7475eb5f4db2-config-volume\") pod \"ab63bc02-007b-4a61-9355-7475eb5f4db2\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.940237 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtmgj\" (UniqueName: \"kubernetes.io/projected/ab63bc02-007b-4a61-9355-7475eb5f4db2-kube-api-access-rtmgj\") pod \"ab63bc02-007b-4a61-9355-7475eb5f4db2\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.940269 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab63bc02-007b-4a61-9355-7475eb5f4db2-secret-volume\") pod \"ab63bc02-007b-4a61-9355-7475eb5f4db2\" (UID: \"ab63bc02-007b-4a61-9355-7475eb5f4db2\") " Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.940309 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.440269815 +0000 UTC m=+156.518367942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.940467 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/796435d5-3cf9-4caf-a033-f58fc197ba12-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"796435d5-3cf9-4caf-a033-f58fc197ba12\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.940540 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.940823 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/796435d5-3cf9-4caf-a033-f58fc197ba12-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"796435d5-3cf9-4caf-a033-f58fc197ba12\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.940939 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/796435d5-3cf9-4caf-a033-f58fc197ba12-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"796435d5-3cf9-4caf-a033-f58fc197ba12\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 00:09:23 crc kubenswrapper[4743]: E1123 00:09:23.941231 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.441204948 +0000 UTC m=+156.519303145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.941458 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab63bc02-007b-4a61-9355-7475eb5f4db2-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab63bc02-007b-4a61-9355-7475eb5f4db2" (UID: "ab63bc02-007b-4a61-9355-7475eb5f4db2"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.953175 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab63bc02-007b-4a61-9355-7475eb5f4db2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab63bc02-007b-4a61-9355-7475eb5f4db2" (UID: "ab63bc02-007b-4a61-9355-7475eb5f4db2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.954988 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab63bc02-007b-4a61-9355-7475eb5f4db2-kube-api-access-rtmgj" (OuterVolumeSpecName: "kube-api-access-rtmgj") pod "ab63bc02-007b-4a61-9355-7475eb5f4db2" (UID: "ab63bc02-007b-4a61-9355-7475eb5f4db2"). InnerVolumeSpecName "kube-api-access-rtmgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:09:23 crc kubenswrapper[4743]: I1123 00:09:23.986538 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/796435d5-3cf9-4caf-a033-f58fc197ba12-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"796435d5-3cf9-4caf-a033-f58fc197ba12\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.042585 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.042838 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.542800238 +0000 UTC m=+156.620898375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.042967 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.043109 4743 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab63bc02-007b-4a61-9355-7475eb5f4db2-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.043123 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtmgj\" (UniqueName: \"kubernetes.io/projected/ab63bc02-007b-4a61-9355-7475eb5f4db2-kube-api-access-rtmgj\") on node \"crc\" DevicePath \"\"" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.043137 4743 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab63bc02-007b-4a61-9355-7475eb5f4db2-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.043360 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.543351872 +0000 UTC m=+156.621449999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.144025 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.144180 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.644145273 +0000 UTC m=+156.722243400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.144409 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.144802 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.644794388 +0000 UTC m=+156.722892515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.148122 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.245914 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.246154 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.746117232 +0000 UTC m=+156.824215359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.246226 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.246654 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.746644325 +0000 UTC m=+156.824742452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.347722 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.347903 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.847875227 +0000 UTC m=+156.925973354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.348076 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.348428 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.84841626 +0000 UTC m=+156.926514387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.450097 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.450964 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:24.950906422 +0000 UTC m=+157.029004549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.497416 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.497776 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397600-qg2pc" event={"ID":"ab63bc02-007b-4a61-9355-7475eb5f4db2","Type":"ContainerDied","Data":"934ed8596b5b2fc326b710458a337e88c047b632c79eef96b41d0579662da2f9"} Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.497929 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="934ed8596b5b2fc326b710458a337e88c047b632c79eef96b41d0579662da2f9" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.498817 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-76vhw"] Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.499106 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab63bc02-007b-4a61-9355-7475eb5f4db2" containerName="collect-profiles" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.499118 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab63bc02-007b-4a61-9355-7475eb5f4db2" containerName="collect-profiles" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.499237 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab63bc02-007b-4a61-9355-7475eb5f4db2" containerName="collect-profiles" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.502130 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.507126 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.519500 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.537477 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-76vhw"] Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.556305 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.556829 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8zw\" (UniqueName: \"kubernetes.io/projected/d65fc52f-316d-4e63-99f0-998c7fb04d89-kube-api-access-2s8zw\") pod \"community-operators-76vhw\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.556977 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-utilities\") pod \"community-operators-76vhw\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.557082 4743 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-catalog-content\") pod \"community-operators-76vhw\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.557678 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.057663117 +0000 UTC m=+157.135761244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.658381 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.658589 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8zw\" (UniqueName: \"kubernetes.io/projected/d65fc52f-316d-4e63-99f0-998c7fb04d89-kube-api-access-2s8zw\") pod \"community-operators-76vhw\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.658661 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-utilities\") pod \"community-operators-76vhw\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.658681 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-catalog-content\") pod \"community-operators-76vhw\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.658986 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.158948 +0000 UTC m=+157.237046127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.659134 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-catalog-content\") pod \"community-operators-76vhw\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.659348 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-utilities\") pod \"community-operators-76vhw\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.688780 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s8zw\" (UniqueName: \"kubernetes.io/projected/d65fc52f-316d-4e63-99f0-998c7fb04d89-kube-api-access-2s8zw\") pod \"community-operators-76vhw\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.689987 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fkhpx"] Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.691682 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.694396 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:24 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:24 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:24 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.694718 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.707336 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.754553 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkhpx"] Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.759965 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hlp6\" (UniqueName: \"kubernetes.io/projected/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-kube-api-access-4hlp6\") pod \"certified-operators-fkhpx\" (UID: \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.760157 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-utilities\") pod \"certified-operators-fkhpx\" (UID: \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.760235 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-catalog-content\") pod \"certified-operators-fkhpx\" (UID: \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.760348 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.760766 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.260752795 +0000 UTC m=+157.338850922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.861211 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.861340 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.861406 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.361387153 +0000 UTC m=+157.439485280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.862256 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-utilities\") pod \"certified-operators-fkhpx\" (UID: \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.862291 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-catalog-content\") pod \"certified-operators-fkhpx\" (UID: \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.862359 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.862418 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hlp6\" (UniqueName: \"kubernetes.io/projected/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-kube-api-access-4hlp6\") pod \"certified-operators-fkhpx\" (UID: \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 
00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.862993 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-utilities\") pod \"certified-operators-fkhpx\" (UID: \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.863088 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-catalog-content\") pod \"certified-operators-fkhpx\" (UID: \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.863094 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.363062553 +0000 UTC m=+157.441160680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.873661 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.890054 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wxwkd"] Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.890265 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05171497-286a-4f55-b08b-e8854eaf10a7" containerName="pruner" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.890277 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="05171497-286a-4f55-b08b-e8854eaf10a7" containerName="pruner" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.890395 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="05171497-286a-4f55-b08b-e8854eaf10a7" containerName="pruner" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.892771 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.893543 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hlp6\" (UniqueName: \"kubernetes.io/projected/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-kube-api-access-4hlp6\") pod \"certified-operators-fkhpx\" (UID: \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.905706 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxwkd"] Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.963556 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.963702 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05171497-286a-4f55-b08b-e8854eaf10a7-kubelet-dir\") pod \"05171497-286a-4f55-b08b-e8854eaf10a7\" (UID: \"05171497-286a-4f55-b08b-e8854eaf10a7\") " Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.963766 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05171497-286a-4f55-b08b-e8854eaf10a7-kube-api-access\") pod \"05171497-286a-4f55-b08b-e8854eaf10a7\" (UID: \"05171497-286a-4f55-b08b-e8854eaf10a7\") " Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.963940 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-utilities\") pod \"community-operators-wxwkd\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.963993 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v44t6\" (UniqueName: \"kubernetes.io/projected/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-kube-api-access-v44t6\") pod \"community-operators-wxwkd\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.964053 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-catalog-content\") pod \"community-operators-wxwkd\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:09:24 crc kubenswrapper[4743]: E1123 00:09:24.964302 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.464277944 +0000 UTC m=+157.542376071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.964340 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05171497-286a-4f55-b08b-e8854eaf10a7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "05171497-286a-4f55-b08b-e8854eaf10a7" (UID: "05171497-286a-4f55-b08b-e8854eaf10a7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:09:24 crc kubenswrapper[4743]: I1123 00:09:24.978886 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05171497-286a-4f55-b08b-e8854eaf10a7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05171497-286a-4f55-b08b-e8854eaf10a7" (UID: "05171497-286a-4f55-b08b-e8854eaf10a7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.033031 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.065386 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-utilities\") pod \"community-operators-wxwkd\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.065450 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v44t6\" (UniqueName: \"kubernetes.io/projected/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-kube-api-access-v44t6\") pod \"community-operators-wxwkd\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.065513 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-catalog-content\") pod \"community-operators-wxwkd\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.065533 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.065573 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05171497-286a-4f55-b08b-e8854eaf10a7-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.065584 4743 reconciler_common.go:293] "Volume detached for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05171497-286a-4f55-b08b-e8854eaf10a7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.065924 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.565905896 +0000 UTC m=+157.644004023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.066431 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-utilities\") pod \"community-operators-wxwkd\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.067442 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-catalog-content\") pod \"community-operators-wxwkd\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.096682 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v44t6\" (UniqueName: \"kubernetes.io/projected/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-kube-api-access-v44t6\") pod \"community-operators-wxwkd\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.098281 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qpmgn"] Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.099291 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.134637 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qpmgn"] Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.166753 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.166995 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-catalog-content\") pod \"certified-operators-qpmgn\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.167129 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6wkl\" (UniqueName: \"kubernetes.io/projected/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-kube-api-access-z6wkl\") pod \"certified-operators-qpmgn\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.167153 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-utilities\") pod \"certified-operators-qpmgn\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.167310 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.667275871 +0000 UTC m=+157.745373998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.168692 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-76vhw"] Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.268503 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6wkl\" (UniqueName: \"kubernetes.io/projected/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-kube-api-access-z6wkl\") pod \"certified-operators-qpmgn\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.268547 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-utilities\") pod \"certified-operators-qpmgn\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.268587 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-catalog-content\") pod \"certified-operators-qpmgn\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.268646 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.268945 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.768929783 +0000 UTC m=+157.847027910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.269196 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-utilities\") pod \"certified-operators-qpmgn\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.269218 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-catalog-content\") pod \"certified-operators-qpmgn\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.312432 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.369617 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.370026 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.870005321 +0000 UTC m=+157.948103448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.370079 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.370402 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.87039511 +0000 UTC m=+157.948493227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.376645 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkhpx"] Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.395213 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6wkl\" (UniqueName: \"kubernetes.io/projected/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-kube-api-access-z6wkl\") pod \"certified-operators-qpmgn\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:09:25 crc kubenswrapper[4743]: W1123 00:09:25.427943 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8ba4c32_40bf_4cdb_b7e3_cdf92ccbfc42.slice/crio-59520d30614d4ca4e83d98f23903156ed849266c205408dc135055aa40c68495 WatchSource:0}: Error finding container 59520d30614d4ca4e83d98f23903156ed849266c205408dc135055aa40c68495: Status 404 returned error can't find the container with id 59520d30614d4ca4e83d98f23903156ed849266c205408dc135055aa40c68495 Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.440991 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.464089 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.471073 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.471295 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.971261123 +0000 UTC m=+158.049359250 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.471810 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.472300 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:25.972284368 +0000 UTC m=+158.050382495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.522273 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkhpx" event={"ID":"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42","Type":"ContainerStarted","Data":"59520d30614d4ca4e83d98f23903156ed849266c205408dc135055aa40c68495"} Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.522430 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xgh54" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.533736 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"796435d5-3cf9-4caf-a033-f58fc197ba12","Type":"ContainerStarted","Data":"7943309f1caadc5e830c2b6b766d138b377bf1c8b43055ab83390608cd25af39"} Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.543091 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76vhw" event={"ID":"d65fc52f-316d-4e63-99f0-998c7fb04d89","Type":"ContainerStarted","Data":"318072d9d108c089cfcfa2efe839b7ffab0c3cf67fcac319ef370a7fa473bfc3"} Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.553249 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"05171497-286a-4f55-b08b-e8854eaf10a7","Type":"ContainerDied","Data":"9470d1b8d4705f19a2a2412aad1e28280520a28ffa265754d104659794e6ef02"} Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.553299 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9470d1b8d4705f19a2a2412aad1e28280520a28ffa265754d104659794e6ef02" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.553409 4743 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.574262 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.574772 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.074729689 +0000 UTC m=+158.152827826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.676323 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.681056 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxwkd"] Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.691027 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.191004203 +0000 UTC m=+158.269102330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.699887 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:25 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:25 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:25 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.700032 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.777755 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.777902 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.277872869 +0000 UTC m=+158.355971006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.778116 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.778722 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.278709979 +0000 UTC m=+158.356808106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.849294 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qpmgn"] Nov 23 00:09:25 crc kubenswrapper[4743]: W1123 00:09:25.855942 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44f650f2_2d6e_40a1_9e6e_4a77dad347cd.slice/crio-845d5c675fc4bd5a80880e4abc49fd27a01a7d2ab93dc790e05e63e9c5ba5d09 WatchSource:0}: Error finding container 845d5c675fc4bd5a80880e4abc49fd27a01a7d2ab93dc790e05e63e9c5ba5d09: Status 404 returned error can't find the container with id 845d5c675fc4bd5a80880e4abc49fd27a01a7d2ab93dc790e05e63e9c5ba5d09 Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.879031 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.879157 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.379133301 +0000 UTC m=+158.457231428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.879390 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.879870 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.379856759 +0000 UTC m=+158.457954886 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:25 crc kubenswrapper[4743]: I1123 00:09:25.981264 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:25 crc kubenswrapper[4743]: E1123 00:09:25.981669 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.481646074 +0000 UTC m=+158.559744201 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.082911 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.083580 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.583552382 +0000 UTC m=+158.661650519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.184592 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.184652 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.68463044 +0000 UTC m=+158.762728567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.185074 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.185428 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.685419899 +0000 UTC m=+158.763518026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.286538 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.286790 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.786769843 +0000 UTC m=+158.864867970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.388364 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.388905 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.888878376 +0000 UTC m=+158.966976503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.489629 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.489943 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.989913453 +0000 UTC m=+159.068011600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.490085 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.490469 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:26.990459456 +0000 UTC m=+159.068557593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.560172 4743 generic.go:334] "Generic (PLEG): container finished" podID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" containerID="d2377041ed873a94d0a7e61d73584cd7a454658cbd8614cfa327049568eeccb4" exitCode=0 Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.560249 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpmgn" event={"ID":"44f650f2-2d6e-40a1-9e6e-4a77dad347cd","Type":"ContainerDied","Data":"d2377041ed873a94d0a7e61d73584cd7a454658cbd8614cfa327049568eeccb4"} Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.560279 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpmgn" event={"ID":"44f650f2-2d6e-40a1-9e6e-4a77dad347cd","Type":"ContainerStarted","Data":"845d5c675fc4bd5a80880e4abc49fd27a01a7d2ab93dc790e05e63e9c5ba5d09"} Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.561901 4743 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.565131 4743 generic.go:334] "Generic (PLEG): container finished" podID="d65fc52f-316d-4e63-99f0-998c7fb04d89" containerID="5584e43a4b1e58ef79dfe15758d39d21a75575374863d1633ee4946967f681a1" exitCode=0 Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.565209 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76vhw" event={"ID":"d65fc52f-316d-4e63-99f0-998c7fb04d89","Type":"ContainerDied","Data":"5584e43a4b1e58ef79dfe15758d39d21a75575374863d1633ee4946967f681a1"} Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.569252 4743 generic.go:334] "Generic (PLEG): container finished" podID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerID="d331ef9b32f55c581069007bc90c5634200b6d1621ad0901dc3626ce5df0554f" exitCode=0 Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.569799 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxwkd" event={"ID":"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e","Type":"ContainerDied","Data":"d331ef9b32f55c581069007bc90c5634200b6d1621ad0901dc3626ce5df0554f"} Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.569843 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxwkd" event={"ID":"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e","Type":"ContainerStarted","Data":"392a4503bab5dc92e34318059a3b06c09cba49dd61f3fea857803d8c9ab36123"} Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.573600 4743 generic.go:334] "Generic (PLEG): container finished" podID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" containerID="a09f98303e69b82227a0dbb98b9c0d3d809129aaf11e8c78f2dd9babd73c57c3" exitCode=0 Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.574715 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkhpx" 
event={"ID":"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42","Type":"ContainerDied","Data":"a09f98303e69b82227a0dbb98b9c0d3d809129aaf11e8c78f2dd9babd73c57c3"} Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.579635 4743 generic.go:334] "Generic (PLEG): container finished" podID="796435d5-3cf9-4caf-a033-f58fc197ba12" containerID="fbf2705806ad71b044d7e0ec731634b4d5719986d18f2ef07bbac36e724bfcb0" exitCode=0 Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.579696 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"796435d5-3cf9-4caf-a033-f58fc197ba12","Type":"ContainerDied","Data":"fbf2705806ad71b044d7e0ec731634b4d5719986d18f2ef07bbac36e724bfcb0"} Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.591186 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.591405 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.09136718 +0000 UTC m=+159.169465307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.592005 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.592703 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.092689372 +0000 UTC m=+159.170787699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.687630 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-chzlq"] Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.689262 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.691568 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.693174 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.693629 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.193608686 +0000 UTC m=+159.271706813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.694051 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:26 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:26 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:26 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.694094 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.742987 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chzlq"] Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.795375 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-utilities\") pod \"redhat-marketplace-chzlq\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.795441 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.795746 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-catalog-content\") pod \"redhat-marketplace-chzlq\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.795810 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.295795801 +0000 UTC m=+159.373893928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.795909 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwp7j\" (UniqueName: \"kubernetes.io/projected/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-kube-api-access-wwp7j\") pod \"redhat-marketplace-chzlq\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.896761 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.897003 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.396962891 +0000 UTC m=+159.475061018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.897219 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-utilities\") pod \"redhat-marketplace-chzlq\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.897270 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.897368 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-catalog-content\") pod \"redhat-marketplace-chzlq\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.897418 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwp7j\" (UniqueName: \"kubernetes.io/projected/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-kube-api-access-wwp7j\") pod \"redhat-marketplace-chzlq\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.897748 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-utilities\") pod \"redhat-marketplace-chzlq\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:09:26 crc kubenswrapper[4743]: E1123 00:09:26.897891 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.397857593 +0000 UTC m=+159.475955760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.898039 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-catalog-content\") pod \"redhat-marketplace-chzlq\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:09:26 crc kubenswrapper[4743]: I1123 00:09:26.939594 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwp7j\" (UniqueName: \"kubernetes.io/projected/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-kube-api-access-wwp7j\") pod \"redhat-marketplace-chzlq\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.000332 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.000642 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.500589101 +0000 UTC m=+159.578687258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.001111 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.001561 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.501546754 +0000 UTC m=+159.579644881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.007271 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.096585 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kmkbh"] Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.097972 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.102147 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmkbh"] Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.103010 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.103203 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.603172735 +0000 UTC m=+159.681270872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.103509 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.103937 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.603917953 +0000 UTC m=+159.682016090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.159775 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.170538 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gdmf8" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.206900 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.207985 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-utilities\") pod \"redhat-marketplace-kmkbh\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.208015 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dkrx\" (UniqueName: \"kubernetes.io/projected/d27fe0a1-5c04-4f21-b376-d31db2fc095c-kube-api-access-2dkrx\") pod \"redhat-marketplace-kmkbh\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.208098 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-catalog-content\") pod \"redhat-marketplace-kmkbh\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.210768 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.710711538 +0000 UTC m=+159.788809675 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.309822 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-utilities\") pod \"redhat-marketplace-kmkbh\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.310161 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dkrx\" (UniqueName: \"kubernetes.io/projected/d27fe0a1-5c04-4f21-b376-d31db2fc095c-kube-api-access-2dkrx\") pod \"redhat-marketplace-kmkbh\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.310187 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.310260 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-catalog-content\") pod \"redhat-marketplace-kmkbh\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.310988 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-catalog-content\") pod \"redhat-marketplace-kmkbh\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.311273 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.811259623 +0000 UTC m=+159.889357750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.313373 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-utilities\") pod \"redhat-marketplace-kmkbh\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.343273 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dkrx\" (UniqueName: \"kubernetes.io/projected/d27fe0a1-5c04-4f21-b376-d31db2fc095c-kube-api-access-2dkrx\") pod \"redhat-marketplace-kmkbh\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.362127 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chzlq"] Nov 23 00:09:27 crc kubenswrapper[4743]: W1123 00:09:27.379611 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2dcacc3_4d6d_4979_9e22_7ea3b0b557da.slice/crio-92c3c8b0088694a03ff2b67f811cf81e719f8ba62a4b03a2a4309de1c6300972 WatchSource:0}: Error finding container 92c3c8b0088694a03ff2b67f811cf81e719f8ba62a4b03a2a4309de1c6300972: Status 404 returned error can't find the container with id 92c3c8b0088694a03ff2b67f811cf81e719f8ba62a4b03a2a4309de1c6300972 Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.411567 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.411811 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.911772118 +0000 UTC m=+159.989870245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.411883 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.412264 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:27.912246549 +0000 UTC m=+159.990344676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.474533 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.512694 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.512948 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.012914047 +0000 UTC m=+160.091012164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.513016 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.513394 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.013375938 +0000 UTC m=+160.091474065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.586307 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chzlq" event={"ID":"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da","Type":"ContainerStarted","Data":"92c3c8b0088694a03ff2b67f811cf81e719f8ba62a4b03a2a4309de1c6300972"} Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.614208 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.614398 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.114366914 +0000 UTC m=+160.192465041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.615032 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.616069 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.116045525 +0000 UTC m=+160.194143652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.686274 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmkbh"] Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.690226 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rmjnm"] Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.690398 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:27 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:27 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:27 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.690479 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.691363 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.695132 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.711044 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmjnm"] Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.716181 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.716416 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcjn2\" (UniqueName: \"kubernetes.io/projected/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-kube-api-access-fcjn2\") pod \"redhat-operators-rmjnm\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.716454 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-catalog-content\") pod \"redhat-operators-rmjnm\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.716557 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.216522748 +0000 UTC m=+160.294620885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.716617 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.716724 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-utilities\") pod \"redhat-operators-rmjnm\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.717023 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.21701072 +0000 UTC m=+160.295108857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.819882 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.820083 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-utilities\") pod \"redhat-operators-rmjnm\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.820173 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcjn2\" (UniqueName: \"kubernetes.io/projected/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-kube-api-access-fcjn2\") pod \"redhat-operators-rmjnm\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.820202 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-catalog-content\") 
pod \"redhat-operators-rmjnm\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.820535 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.320510757 +0000 UTC m=+160.398608884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.820870 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-utilities\") pod \"redhat-operators-rmjnm\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.821471 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-catalog-content\") pod \"redhat-operators-rmjnm\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.851378 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcjn2\" (UniqueName: \"kubernetes.io/projected/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-kube-api-access-fcjn2\") pod \"redhat-operators-rmjnm\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.866951 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.921067 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/796435d5-3cf9-4caf-a033-f58fc197ba12-kube-api-access\") pod \"796435d5-3cf9-4caf-a033-f58fc197ba12\" (UID: \"796435d5-3cf9-4caf-a033-f58fc197ba12\") " Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.921280 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/796435d5-3cf9-4caf-a033-f58fc197ba12-kubelet-dir\") pod \"796435d5-3cf9-4caf-a033-f58fc197ba12\" (UID: \"796435d5-3cf9-4caf-a033-f58fc197ba12\") " Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.921564 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:27 crc kubenswrapper[4743]: E1123 00:09:27.921963 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.421948723 +0000 UTC m=+160.500046850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.922136 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/796435d5-3cf9-4caf-a033-f58fc197ba12-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "796435d5-3cf9-4caf-a033-f58fc197ba12" (UID: "796435d5-3cf9-4caf-a033-f58fc197ba12"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:09:27 crc kubenswrapper[4743]: I1123 00:09:27.934758 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/796435d5-3cf9-4caf-a033-f58fc197ba12-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "796435d5-3cf9-4caf-a033-f58fc197ba12" (UID: "796435d5-3cf9-4caf-a033-f58fc197ba12"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.011076 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.022019 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.022314 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.522269993 +0000 UTC m=+160.600368130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.022411 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.022477 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/796435d5-3cf9-4caf-a033-f58fc197ba12-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.022512 4743 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/796435d5-3cf9-4caf-a033-f58fc197ba12-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.022863 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.522853317 +0000 UTC m=+160.600951444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.086011 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cfgc6"] Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.086238 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796435d5-3cf9-4caf-a033-f58fc197ba12" containerName="pruner" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.086250 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="796435d5-3cf9-4caf-a033-f58fc197ba12" containerName="pruner" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.086344 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="796435d5-3cf9-4caf-a033-f58fc197ba12" containerName="pruner" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.087040 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.117700 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cfgc6"] Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.130891 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.131090 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.631071927 +0000 UTC m=+160.709170054 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.131182 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-utilities\") pod \"redhat-operators-cfgc6\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.131219 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-catalog-content\") pod \"redhat-operators-cfgc6\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.131245 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh76t\" (UniqueName: \"kubernetes.io/projected/cd3f05c7-315b-49fe-8a51-e17dba6b426d-kube-api-access-sh76t\") pod \"redhat-operators-cfgc6\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.131389 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.131910 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.631884257 +0000 UTC m=+160.709982384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.135679 4743 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z7mnv container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 23 00:09:28 crc kubenswrapper[4743]: [+]log ok Nov 23 00:09:28 crc kubenswrapper[4743]: [+]etcd ok Nov 23 00:09:28 crc kubenswrapper[4743]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 23 00:09:28 crc kubenswrapper[4743]: [+]poststarthook/generic-apiserver-start-informers ok Nov 23 00:09:28 crc kubenswrapper[4743]: [+]poststarthook/max-in-flight-filter ok Nov 23 00:09:28 crc kubenswrapper[4743]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 23 00:09:28 crc kubenswrapper[4743]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 23 00:09:28 crc kubenswrapper[4743]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 23 00:09:28 crc kubenswrapper[4743]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 23 00:09:28 crc kubenswrapper[4743]: [+]poststarthook/project.openshift.io-projectcache ok Nov 23 00:09:28 crc kubenswrapper[4743]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 23 00:09:28 crc kubenswrapper[4743]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Nov 23 00:09:28 crc kubenswrapper[4743]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 23 00:09:28 crc kubenswrapper[4743]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 23 00:09:28 crc kubenswrapper[4743]: livez check failed Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.135734 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" podUID="203f4e5b-490a-43cb-90db-8beed3234d54" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.171820 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-g77gl" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.240193 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.240398 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh76t\" (UniqueName: \"kubernetes.io/projected/cd3f05c7-315b-49fe-8a51-e17dba6b426d-kube-api-access-sh76t\") pod \"redhat-operators-cfgc6\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.240634 4743 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-utilities\") pod \"redhat-operators-cfgc6\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.240681 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-catalog-content\") pod \"redhat-operators-cfgc6\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.241142 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-catalog-content\") pod \"redhat-operators-cfgc6\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.241231 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.741208694 +0000 UTC m=+160.819306821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.242954 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-utilities\") pod \"redhat-operators-cfgc6\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.268983 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh76t\" (UniqueName: \"kubernetes.io/projected/cd3f05c7-315b-49fe-8a51-e17dba6b426d-kube-api-access-sh76t\") pod \"redhat-operators-cfgc6\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.321932 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmjnm"] Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.343207 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.843188684 +0000 UTC m=+160.921286811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.342757 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.416837 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.445223 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.445383 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.945352508 +0000 UTC m=+161.023450635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.445521 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.445873 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:28.94586389 +0000 UTC m=+161.023962017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.546275 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.546850 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.046827235 +0000 UTC m=+161.124925362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.596237 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmkbh" event={"ID":"d27fe0a1-5c04-4f21-b376-d31db2fc095c","Type":"ContainerStarted","Data":"d9acb553dd84eff40a52c26d29ff8ef12a761cb97a2ed635104055c65b330262"} Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.597639 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmjnm" event={"ID":"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7","Type":"ContainerStarted","Data":"ed245aa00cab6f7fdbd32f74746b55a9b72f8f6a1d9e5a6c44715c3eb6d79e75"} Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.599226 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"796435d5-3cf9-4caf-a033-f58fc197ba12","Type":"ContainerDied","Data":"7943309f1caadc5e830c2b6b766d138b377bf1c8b43055ab83390608cd25af39"} Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.599285 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7943309f1caadc5e830c2b6b766d138b377bf1c8b43055ab83390608cd25af39" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.599251 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.603275 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chzlq" event={"ID":"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da","Type":"ContainerStarted","Data":"4b2abe06f537c20ac545ef70f13058ae5af01be5524b2b779e1f4cc8834af8b2"} Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.611374 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cfgc6"] Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.648228 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.648711 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.148692432 +0000 UTC m=+161.226790579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.689537 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:28 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:28 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:28 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.689658 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.749077 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.749365 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.24933032 +0000 UTC m=+161.327428457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.749448 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.749908 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.249890303 +0000 UTC m=+161.327988430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.851362 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.851870 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.351847973 +0000 UTC m=+161.429946100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:28 crc kubenswrapper[4743]: I1123 00:09:28.952940 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:28 crc kubenswrapper[4743]: E1123 00:09:28.953437 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.453413762 +0000 UTC m=+161.531511889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.054558 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.054810 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.554763907 +0000 UTC m=+161.632862034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.054867 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.055333 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.55532182 +0000 UTC m=+161.633419957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.156554 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.156972 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.656948752 +0000 UTC m=+161.735046879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.259238 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.259678 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.759656979 +0000 UTC m=+161.837755106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.360647 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.360891 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.86086067 +0000 UTC m=+161.938958797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.361204 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.361752 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.861742321 +0000 UTC m=+161.939840448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.462842 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.463268 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:29.96325124 +0000 UTC m=+162.041349367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.564519 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.564980 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:30.064956583 +0000 UTC m=+162.143054720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.616161 4743 generic.go:334] "Generic (PLEG): container finished" podID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" containerID="ebee248e3d1d80df4c2715a2f292373cbbfa5b5bb97908e23b24558d5151dde0" exitCode=0 Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.616631 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmjnm" event={"ID":"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7","Type":"ContainerDied","Data":"ebee248e3d1d80df4c2715a2f292373cbbfa5b5bb97908e23b24558d5151dde0"} Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.623100 4743 generic.go:334] "Generic (PLEG): container finished" podID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" containerID="a672d5172433b9196791a95c1cf8dd089df3d5e9c526024e015c03ca5d05680d" exitCode=0 Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.623206 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfgc6" event={"ID":"cd3f05c7-315b-49fe-8a51-e17dba6b426d","Type":"ContainerDied","Data":"a672d5172433b9196791a95c1cf8dd089df3d5e9c526024e015c03ca5d05680d"} Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.623241 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfgc6" event={"ID":"cd3f05c7-315b-49fe-8a51-e17dba6b426d","Type":"ContainerStarted","Data":"fff93ae0bf5e99eb242b40756b83d6bc9332753be40915a7d16e0f27b95c64f9"} Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.629801 4743 generic.go:334] "Generic (PLEG): container finished" podID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" containerID="4b2abe06f537c20ac545ef70f13058ae5af01be5524b2b779e1f4cc8834af8b2" exitCode=0 Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.629933 4743 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-chzlq" event={"ID":"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da","Type":"ContainerDied","Data":"4b2abe06f537c20ac545ef70f13058ae5af01be5524b2b779e1f4cc8834af8b2"} Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.635364 4743 generic.go:334] "Generic (PLEG): container finished" podID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" containerID="7ea7b0b1f0e25123deb0a80d66972f1a570669c7e6f19f1dbcd684fae5308653" exitCode=0 Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.635771 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmkbh" event={"ID":"d27fe0a1-5c04-4f21-b376-d31db2fc095c","Type":"ContainerDied","Data":"7ea7b0b1f0e25123deb0a80d66972f1a570669c7e6f19f1dbcd684fae5308653"} Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.644284 4743 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.646836 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l6msj" event={"ID":"76c94f30-89a4-408d-8168-a49eb3869a39","Type":"ContainerStarted","Data":"2b8d9be91351664d0da29723c7263d384bf1b4c4192f7eb3c7db0d25e3b1ae03"} Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.646899 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l6msj" event={"ID":"76c94f30-89a4-408d-8168-a49eb3869a39","Type":"ContainerStarted","Data":"168f96874d5aee0e53f538a0527813bcfbee3556ddb6f3d22e83b8c6d15806d6"} Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.666465 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.668028 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:30.168003988 +0000 UTC m=+162.246102115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.691016 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:29 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:29 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:29 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.691086 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.768917 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.769387 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:30.269367253 +0000 UTC m=+162.347465380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.870365 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.870606 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:30.370565264 +0000 UTC m=+162.448663391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.871282 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.871958 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 00:09:30.371941577 +0000 UTC m=+162.450039704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t4jq5" (UID: "87578262-f89f-4b5c-92ab-a94000397e31") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.957145 4743 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-23T00:09:29.644305057Z","Handler":null,"Name":""} Nov 23 00:09:29 crc kubenswrapper[4743]: I1123 00:09:29.972014 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:29 crc kubenswrapper[4743]: E1123 00:09:29.972598 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 00:09:30.472573345 +0000 UTC m=+162.550671472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.059355 4743 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.059831 4743 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.074436 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.086315 4743 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.086389 4743 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.155384 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t4jq5\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.176329 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.185573 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.408716 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.631126 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t4jq5"] Nov 23 00:09:30 crc kubenswrapper[4743]: W1123 00:09:30.641318 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87578262_f89f_4b5c_92ab_a94000397e31.slice/crio-78b9d1ccbb687865b827ff8e12fe054a745d1d85cdc2f12938d83361842697e0 WatchSource:0}: Error finding container 78b9d1ccbb687865b827ff8e12fe054a745d1d85cdc2f12938d83361842697e0: Status 404 returned error can't find the container with id 78b9d1ccbb687865b827ff8e12fe054a745d1d85cdc2f12938d83361842697e0 Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.678754 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l6msj" event={"ID":"76c94f30-89a4-408d-8168-a49eb3869a39","Type":"ContainerStarted","Data":"7a90f3ef622ac095a51f867d4d8f03460ed8c936f32bc199b6a4bd70a0e1f42d"} Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.693428 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:30 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:30 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:30 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.693512 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.697255 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" event={"ID":"87578262-f89f-4b5c-92ab-a94000397e31","Type":"ContainerStarted","Data":"78b9d1ccbb687865b827ff8e12fe054a745d1d85cdc2f12938d83361842697e0"} Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.714730 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-l6msj" podStartSLOduration=21.714706505 podStartE2EDuration="21.714706505s" podCreationTimestamp="2025-11-23 00:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:30.708965337 +0000 UTC m=+162.787063484" watchObservedRunningTime="2025-11-23 00:09:30.714706505 +0000 UTC m=+162.792804632" Nov 23 00:09:30 crc kubenswrapper[4743]: I1123 00:09:30.731880 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 23 00:09:31 crc kubenswrapper[4743]: I1123 00:09:31.688558 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:31 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:31 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:31 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:31 crc kubenswrapper[4743]: I1123 00:09:31.688963 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:31 crc kubenswrapper[4743]: I1123 00:09:31.705158 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" event={"ID":"87578262-f89f-4b5c-92ab-a94000397e31","Type":"ContainerStarted","Data":"0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694"} Nov 23 00:09:31 crc kubenswrapper[4743]: I1123 00:09:31.838931 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:31 crc kubenswrapper[4743]: I1123 00:09:31.843667 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-z7mnv" Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.032673 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.032696 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.032748 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.032781 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.363972 4743 patch_prober.go:28] interesting pod/console-f9d7485db-k4dzd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.364033 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-k4dzd" podUID="fccac410-c6c3-454f-938c-64beeb04e317" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.688454 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:32 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:32 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:32 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.688878 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.713255 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.738242 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" podStartSLOduration=142.738212051 podStartE2EDuration="2m22.738212051s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:09:32.735089996 +0000 UTC m=+164.813188143" watchObservedRunningTime="2025-11-23 00:09:32.738212051 +0000 UTC m=+164.816310178" Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.759220 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:09:32 crc kubenswrapper[4743]: I1123 00:09:32.831426 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b8wg9" Nov 23 00:09:33 crc kubenswrapper[4743]: I1123 00:09:33.687093 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:33 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:33 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:33 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:33 crc kubenswrapper[4743]: I1123 00:09:33.687167 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:34 crc kubenswrapper[4743]: I1123 00:09:34.549785 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:09:34 crc kubenswrapper[4743]: I1123 00:09:34.556185 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ea31d8-fd1d-4396-9b78-3058666d315a-metrics-certs\") pod \"network-metrics-daemon-t8ddf\" (UID: \"24ea31d8-fd1d-4396-9b78-3058666d315a\") " pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:09:34 crc kubenswrapper[4743]: I1123 00:09:34.674454 4743 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8ddf" Nov 23 00:09:34 crc kubenswrapper[4743]: I1123 00:09:34.688178 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:34 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:34 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:34 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:34 crc kubenswrapper[4743]: I1123 00:09:34.688250 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:34 crc kubenswrapper[4743]: I1123 00:09:34.964086 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-t8ddf"] Nov 23 00:09:34 crc kubenswrapper[4743]: W1123 00:09:34.977729 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24ea31d8_fd1d_4396_9b78_3058666d315a.slice/crio-73b757cec360c256ee1159d85cfc52b08de5a5884231e3f6970b7d16f829f8e2 WatchSource:0}: Error finding container 73b757cec360c256ee1159d85cfc52b08de5a5884231e3f6970b7d16f829f8e2: Status 404 returned error can't find the container with id 73b757cec360c256ee1159d85cfc52b08de5a5884231e3f6970b7d16f829f8e2 Nov 23 00:09:35 crc kubenswrapper[4743]: I1123 00:09:35.687598 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:35 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:35 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:35 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:35 crc kubenswrapper[4743]: I1123 00:09:35.688098 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:35 crc kubenswrapper[4743]: I1123 00:09:35.756615 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" event={"ID":"24ea31d8-fd1d-4396-9b78-3058666d315a","Type":"ContainerStarted","Data":"76f13083d27fc88594e1c698484194bfed34b0298f9da30e8d131bed63dcc68b"} Nov 23 00:09:35 crc kubenswrapper[4743]: I1123 00:09:35.756694 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" event={"ID":"24ea31d8-fd1d-4396-9b78-3058666d315a","Type":"ContainerStarted","Data":"73b757cec360c256ee1159d85cfc52b08de5a5884231e3f6970b7d16f829f8e2"} Nov 23 00:09:36 crc kubenswrapper[4743]: I1123 00:09:36.687554 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:36 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:36 crc kubenswrapper[4743]: [+]process-running 
ok Nov 23 00:09:36 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:36 crc kubenswrapper[4743]: I1123 00:09:36.687660 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:37 crc kubenswrapper[4743]: I1123 00:09:37.687703 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:37 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:37 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:37 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:37 crc kubenswrapper[4743]: I1123 00:09:37.687775 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:38 crc kubenswrapper[4743]: I1123 00:09:38.687201 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:38 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:38 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:38 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:38 crc kubenswrapper[4743]: I1123 00:09:38.687283 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:39 crc kubenswrapper[4743]: I1123 00:09:39.688562 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:39 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:39 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:39 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:39 crc kubenswrapper[4743]: I1123 00:09:39.688913 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:40 crc kubenswrapper[4743]: I1123 00:09:40.694750 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 00:09:40 crc kubenswrapper[4743]: [-]has-synced failed: reason withheld Nov 23 00:09:40 crc kubenswrapper[4743]: [+]process-running ok Nov 23 00:09:40 crc kubenswrapper[4743]: healthz check failed Nov 23 00:09:40 crc kubenswrapper[4743]: I1123 00:09:40.694858 4743 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 00:09:41 crc kubenswrapper[4743]: I1123 00:09:41.688907 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:41 crc kubenswrapper[4743]: I1123 00:09:41.692088 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-sqgm6" Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.032462 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.032543 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.032601 4743 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-dfr5p" Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.032626 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.032694 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.033364 4743 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"8b46031a580b041f2fd1aaca9e9437b9de93673d25c1781f7d80e5d31bbaf61b"} pod="openshift-console/downloads-7954f5f757-dfr5p" containerMessage="Container download-server failed liveness probe, will be restarted" Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.033464 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" containerID="cri-o://8b46031a580b041f2fd1aaca9e9437b9de93673d25c1781f7d80e5d31bbaf61b" gracePeriod=2 Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.034737 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.034765 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.364047 4743 patch_prober.go:28] interesting pod/console-f9d7485db-k4dzd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.364107 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-k4dzd" podUID="fccac410-c6c3-454f-938c-64beeb04e317" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.826834 4743 generic.go:334] "Generic (PLEG): container finished" podID="a529fd56-b206-4ec0-984e-addbd17374ee" containerID="8b46031a580b041f2fd1aaca9e9437b9de93673d25c1781f7d80e5d31bbaf61b" exitCode=0 Nov 23 00:09:42 crc kubenswrapper[4743]: I1123 00:09:42.826903 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dfr5p" event={"ID":"a529fd56-b206-4ec0-984e-addbd17374ee","Type":"ContainerDied","Data":"8b46031a580b041f2fd1aaca9e9437b9de93673d25c1781f7d80e5d31bbaf61b"} Nov 23 00:09:50 crc kubenswrapper[4743]: I1123 00:09:50.417995 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:09:52 crc kubenswrapper[4743]: I1123 00:09:52.034940 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:09:52 crc kubenswrapper[4743]: I1123 00:09:52.035355 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:09:52 crc kubenswrapper[4743]: I1123 00:09:52.142271 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fx5d9" Nov 23 00:09:52 crc kubenswrapper[4743]: I1123 00:09:52.369679 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:52 crc kubenswrapper[4743]: I1123 00:09:52.375848 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-k4dzd" Nov 23 00:09:53 crc kubenswrapper[4743]: I1123 00:09:53.690573 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:09:53 crc kubenswrapper[4743]: I1123 00:09:53.690654 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Nov 23 00:09:54 crc kubenswrapper[4743]: E1123 00:09:54.643077 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 23 00:09:54 crc kubenswrapper[4743]: E1123 00:09:54.643340 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcjn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rmjnm_openshift-marketplace(a93593fa-0b89-4a93-8edf-32e2e6c3b1d7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 00:09:54 crc kubenswrapper[4743]: E1123 00:09:54.644653 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rmjnm" podUID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" Nov 23 00:09:55 crc kubenswrapper[4743]: I1123 00:09:55.911951 4743 generic.go:334] "Generic (PLEG): container finished" podID="2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae" containerID="24aca8838ad4fac1e38ad81127b442c63866e5fec0f1c295af85130086e18e17" exitCode=0 Nov 23 00:09:55 crc kubenswrapper[4743]: I1123 00:09:55.912001 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29397600-gj2zp" event={"ID":"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae","Type":"ContainerDied","Data":"24aca8838ad4fac1e38ad81127b442c63866e5fec0f1c295af85130086e18e17"} Nov 23 00:09:58 crc kubenswrapper[4743]: I1123 00:09:58.863468 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 00:10:02 crc kubenswrapper[4743]: I1123 00:10:02.032998 4743 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:10:02 crc kubenswrapper[4743]: I1123 00:10:02.033550 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:10:02 crc kubenswrapper[4743]: E1123 00:10:02.630064 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rmjnm" podUID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" Nov 23 00:10:05 crc kubenswrapper[4743]: E1123 00:10:05.429324 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 23 00:10:05 crc kubenswrapper[4743]: E1123 00:10:05.430023 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v44t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wxwkd_openshift-marketplace(a5e3c40b-628a-443f-84b7-0ba7cb77aa1e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 00:10:05 crc kubenswrapper[4743]: E1123 00:10:05.431256 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/community-operators-wxwkd" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" Nov 23 00:10:10 crc kubenswrapper[4743]: E1123 00:10:10.339409 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wxwkd" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" Nov 23 00:10:12 crc kubenswrapper[4743]: I1123 00:10:12.032920 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:10:12 crc kubenswrapper[4743]: I1123 00:10:12.033025 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:10:13 crc kubenswrapper[4743]: E1123 00:10:13.070600 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 23 00:10:13 crc kubenswrapper[4743]: E1123 00:10:13.071241 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2s8zw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-76vhw_openshift-marketplace(d65fc52f-316d-4e63-99f0-998c7fb04d89): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 00:10:13 crc kubenswrapper[4743]: E1123 00:10:13.072600 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-76vhw" podUID="d65fc52f-316d-4e63-99f0-998c7fb04d89" Nov 23 00:10:22 crc kubenswrapper[4743]: I1123 00:10:22.032656 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:10:22 crc kubenswrapper[4743]: I1123 00:10:22.033135 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:10:23 crc kubenswrapper[4743]: I1123 00:10:23.690301 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:10:23 crc kubenswrapper[4743]: I1123 00:10:23.690847 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:10:23 crc kubenswrapper[4743]: I1123 00:10:23.690941 4743 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:10:23 crc kubenswrapper[4743]: I1123 00:10:23.692091 4743 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4"} pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 00:10:23 crc kubenswrapper[4743]: I1123 00:10:23.692219 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" containerID="cri-o://a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4" gracePeriod=600 Nov 23 00:10:25 crc kubenswrapper[4743]: E1123 00:10:25.398864 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 23 00:10:25 crc kubenswrapper[4743]: E1123 00:10:25.400199 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6wkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qpmgn_openshift-marketplace(44f650f2-2d6e-40a1-9e6e-4a77dad347cd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 00:10:25 crc kubenswrapper[4743]: E1123 00:10:25.401631 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qpmgn" podUID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" Nov 23 00:10:32 crc kubenswrapper[4743]: I1123 00:10:32.032686 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:10:32 crc kubenswrapper[4743]: I1123 00:10:32.033852 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:10:37 crc kubenswrapper[4743]: E1123 00:10:37.200312 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 23 00:10:37 crc kubenswrapper[4743]: E1123 00:10:37.202550 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sh76t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cfgc6_openshift-marketplace(cd3f05c7-315b-49fe-8a51-e17dba6b426d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 00:10:37 crc kubenswrapper[4743]: E1123 00:10:37.204147 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cfgc6" podUID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" Nov 23 00:10:37 crc kubenswrapper[4743]: I1123 00:10:37.260651 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29397600-gj2zp" event={"ID":"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae","Type":"ContainerDied","Data":"9304a0c64fb29a2a633517d1c85b80055fde4e0d6be794dc4587ccbc0da605f1"} Nov 23 00:10:37 crc kubenswrapper[4743]: I1123 00:10:37.260740 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9304a0c64fb29a2a633517d1c85b80055fde4e0d6be794dc4587ccbc0da605f1" Nov 23 00:10:37 crc kubenswrapper[4743]: I1123 00:10:37.264181 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29397600-gj2zp" Nov 23 00:10:37 crc kubenswrapper[4743]: I1123 00:10:37.399036 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-serviceca\") pod \"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae\" (UID: \"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae\") " Nov 23 00:10:37 crc kubenswrapper[4743]: I1123 00:10:37.399143 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ggdv\" (UniqueName: \"kubernetes.io/projected/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-kube-api-access-4ggdv\") pod \"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae\" (UID: \"2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae\") " Nov 23 00:10:37 crc kubenswrapper[4743]: I1123 00:10:37.399993 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-serviceca" (OuterVolumeSpecName: "serviceca") pod "2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae" (UID: "2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:10:37 crc kubenswrapper[4743]: I1123 00:10:37.407237 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-kube-api-access-4ggdv" (OuterVolumeSpecName: "kube-api-access-4ggdv") pod "2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae" (UID: "2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae"). InnerVolumeSpecName "kube-api-access-4ggdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:10:37 crc kubenswrapper[4743]: I1123 00:10:37.501422 4743 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-serviceca\") on node \"crc\" DevicePath \"\"" Nov 23 00:10:37 crc kubenswrapper[4743]: I1123 00:10:37.501514 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ggdv\" (UniqueName: \"kubernetes.io/projected/2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae-kube-api-access-4ggdv\") on node \"crc\" DevicePath \"\"" Nov 23 00:10:38 crc kubenswrapper[4743]: I1123 00:10:38.269648 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29397600-gj2zp" Nov 23 00:10:39 crc kubenswrapper[4743]: I1123 00:10:39.280022 4743 generic.go:334] "Generic (PLEG): container finished" podID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerID="a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4" exitCode=0 Nov 23 00:10:39 crc kubenswrapper[4743]: I1123 00:10:39.280085 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerDied","Data":"a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4"} Nov 23 00:10:42 crc kubenswrapper[4743]: I1123 00:10:42.033361 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:10:42 crc kubenswrapper[4743]: I1123 00:10:42.034117 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:10:42 crc kubenswrapper[4743]: E1123 00:10:42.508731 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 23 00:10:42 crc kubenswrapper[4743]: E1123 00:10:42.508997 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4hlp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fkhpx_openshift-marketplace(b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" logger="UnhandledError" Nov 23 00:10:42 crc kubenswrapper[4743]: E1123 00:10:42.510205 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fkhpx" podUID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" Nov 23 00:10:43 crc kubenswrapper[4743]: E1123 00:10:43.616841 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cfgc6" podUID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" Nov 23 00:10:43 crc kubenswrapper[4743]: E1123 00:10:43.617416 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fkhpx" podUID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" Nov 23 00:10:43 crc kubenswrapper[4743]: E1123 00:10:43.629656 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 23 00:10:43 crc kubenswrapper[4743]: E1123 00:10:43.629901 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dkrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kmkbh_openshift-marketplace(d27fe0a1-5c04-4f21-b376-d31db2fc095c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 00:10:43 crc kubenswrapper[4743]: E1123 00:10:43.631188 4743 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kmkbh" podUID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" Nov 23 00:10:43 crc kubenswrapper[4743]: E1123 00:10:43.633839 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 23 00:10:43 crc kubenswrapper[4743]: E1123 00:10:43.634081 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwp7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-chzlq_openshift-marketplace(d2dcacc3-4d6d-4979-9e22-7ea3b0b557da): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 00:10:43 crc kubenswrapper[4743]: E1123 00:10:43.635320 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-chzlq" podUID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" Nov 23 00:10:44 crc kubenswrapper[4743]: E1123 00:10:44.313342 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kmkbh" podUID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" Nov 23 00:10:44 crc kubenswrapper[4743]: E1123 00:10:44.314375 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-chzlq" podUID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" Nov 23 00:10:50 crc kubenswrapper[4743]: I1123 00:10:50.353307 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dfr5p" event={"ID":"a529fd56-b206-4ec0-984e-addbd17374ee","Type":"ContainerStarted","Data":"565ce06fda1f3958a077f071e4a60081062094db4f1a4891db7c76cf10640d70"} Nov 23 00:10:51 crc kubenswrapper[4743]: I1123 00:10:51.363898 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-t8ddf" event={"ID":"24ea31d8-fd1d-4396-9b78-3058666d315a","Type":"ContainerStarted","Data":"80a6ff36212e8b5bd1abb6d383e1e2225dcbcf33473742c6d57f7e08cbee3902"} Nov 23 00:10:52 crc kubenswrapper[4743]: I1123 00:10:52.032420 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:10:52 crc kubenswrapper[4743]: I1123 00:10:52.032561 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:10:53 crc kubenswrapper[4743]: I1123 00:10:53.380162 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"f2b13ebe17552f951faf90e351bc90e649033afac2af3f67154656c929f99c99"} Nov 23 00:10:53 crc kubenswrapper[4743]: I1123 00:10:53.380783 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dfr5p" Nov 23 00:10:53 crc kubenswrapper[4743]: I1123 00:10:53.380903 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:10:53 crc kubenswrapper[4743]: I1123 00:10:53.380973 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:10:53 crc kubenswrapper[4743]: I1123 00:10:53.444929 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-t8ddf" podStartSLOduration=223.444891175 podStartE2EDuration="3m43.444891175s" podCreationTimestamp="2025-11-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:10:53.441185562 +0000 UTC m=+245.519283699" watchObservedRunningTime="2025-11-23 00:10:53.444891175 +0000 UTC m=+245.522989342" Nov 23 00:10:54 crc kubenswrapper[4743]: I1123 00:10:54.389768 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:10:54 crc kubenswrapper[4743]: I1123 00:10:54.390707 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:11:02 crc kubenswrapper[4743]: I1123 00:11:02.032849 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:11:02 crc kubenswrapper[4743]: I1123 00:11:02.034308 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:11:02 crc kubenswrapper[4743]: I1123 00:11:02.032917 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:11:02 crc kubenswrapper[4743]: I1123 00:11:02.034822 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:11:12 crc kubenswrapper[4743]: I1123 00:11:12.032857 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:11:12 crc kubenswrapper[4743]: I1123 00:11:12.033745 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:11:12 crc kubenswrapper[4743]: I1123 00:11:12.032867 4743 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfr5p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 23 00:11:12 crc kubenswrapper[4743]: I1123 00:11:12.034036 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfr5p" podUID="a529fd56-b206-4ec0-984e-addbd17374ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 23 00:11:12 crc kubenswrapper[4743]: I1123 00:11:12.522119 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxwkd" 
event={"ID":"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e","Type":"ContainerStarted","Data":"c5681e254ec3f507039bfbe087e843609d64792e7553441079420165f3b004af"} Nov 23 00:11:12 crc kubenswrapper[4743]: I1123 00:11:12.526537 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmjnm" event={"ID":"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7","Type":"ContainerStarted","Data":"75060fce8b1d17b4cd5874d4952edc4177c487b97ccec5b65fe90328c498837e"} Nov 23 00:11:12 crc kubenswrapper[4743]: I1123 00:11:12.529269 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpmgn" event={"ID":"44f650f2-2d6e-40a1-9e6e-4a77dad347cd","Type":"ContainerStarted","Data":"332bdae5a210a4a582dad812c305a36d57b5c38327aef41f68a4ed922be22ce3"} Nov 23 00:11:12 crc kubenswrapper[4743]: I1123 00:11:12.531594 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76vhw" event={"ID":"d65fc52f-316d-4e63-99f0-998c7fb04d89","Type":"ContainerStarted","Data":"562905a2b343571513ee90839dfc3e3345a86a00aab1a9dbf58268f2f06cbb61"} Nov 23 00:11:13 crc kubenswrapper[4743]: I1123 00:11:13.546099 4743 generic.go:334] "Generic (PLEG): container finished" podID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerID="c5681e254ec3f507039bfbe087e843609d64792e7553441079420165f3b004af" exitCode=0 Nov 23 00:11:13 crc kubenswrapper[4743]: I1123 00:11:13.546677 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxwkd" event={"ID":"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e","Type":"ContainerDied","Data":"c5681e254ec3f507039bfbe087e843609d64792e7553441079420165f3b004af"} Nov 23 00:11:13 crc kubenswrapper[4743]: I1123 00:11:13.549177 4743 generic.go:334] "Generic (PLEG): container finished" podID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" containerID="75060fce8b1d17b4cd5874d4952edc4177c487b97ccec5b65fe90328c498837e" exitCode=0 Nov 23 00:11:13 crc kubenswrapper[4743]: I1123 00:11:13.549296 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmjnm" event={"ID":"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7","Type":"ContainerDied","Data":"75060fce8b1d17b4cd5874d4952edc4177c487b97ccec5b65fe90328c498837e"} Nov 23 00:11:13 crc kubenswrapper[4743]: I1123 00:11:13.552867 4743 generic.go:334] "Generic (PLEG): container finished" podID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" containerID="332bdae5a210a4a582dad812c305a36d57b5c38327aef41f68a4ed922be22ce3" exitCode=0 Nov 23 00:11:13 crc kubenswrapper[4743]: I1123 00:11:13.552961 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpmgn" event={"ID":"44f650f2-2d6e-40a1-9e6e-4a77dad347cd","Type":"ContainerDied","Data":"332bdae5a210a4a582dad812c305a36d57b5c38327aef41f68a4ed922be22ce3"} Nov 23 00:11:13 crc kubenswrapper[4743]: I1123 00:11:13.555000 4743 generic.go:334] "Generic (PLEG): container finished" podID="d65fc52f-316d-4e63-99f0-998c7fb04d89" containerID="562905a2b343571513ee90839dfc3e3345a86a00aab1a9dbf58268f2f06cbb61" exitCode=0 Nov 23 00:11:13 crc kubenswrapper[4743]: I1123 00:11:13.555033 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76vhw" event={"ID":"d65fc52f-316d-4e63-99f0-998c7fb04d89","Type":"ContainerDied","Data":"562905a2b343571513ee90839dfc3e3345a86a00aab1a9dbf58268f2f06cbb61"} Nov 23 00:11:15 crc kubenswrapper[4743]: I1123 00:11:15.569357 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kmkbh" event={"ID":"d27fe0a1-5c04-4f21-b376-d31db2fc095c","Type":"ContainerStarted","Data":"8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070"} Nov 23 00:11:15 crc kubenswrapper[4743]: I1123 00:11:15.572037 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxwkd" event={"ID":"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e","Type":"ContainerStarted","Data":"3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87"} Nov 23 00:11:16 crc kubenswrapper[4743]: I1123 00:11:16.580820 4743 generic.go:334] "Generic (PLEG): container finished" podID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" containerID="8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070" exitCode=0 Nov 23 00:11:16 crc kubenswrapper[4743]: I1123 00:11:16.580915 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmkbh" event={"ID":"d27fe0a1-5c04-4f21-b376-d31db2fc095c","Type":"ContainerDied","Data":"8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070"} Nov 23 00:11:16 crc kubenswrapper[4743]: I1123 00:11:16.603940 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wxwkd" podStartSLOduration=4.076827291 podStartE2EDuration="1m52.603901745s" podCreationTimestamp="2025-11-23 00:09:24 +0000 UTC" firstStartedPulling="2025-11-23 00:09:26.572252269 +0000 UTC m=+158.650350406" lastFinishedPulling="2025-11-23 00:11:15.099326723 +0000 UTC m=+267.177424860" observedRunningTime="2025-11-23 00:11:15.593260733 +0000 UTC m=+267.671358900" watchObservedRunningTime="2025-11-23 00:11:16.603901745 +0000 UTC m=+268.681999912" Nov 23 00:11:17 crc kubenswrapper[4743]: I1123 00:11:17.591683 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkhpx" event={"ID":"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42","Type":"ContainerStarted","Data":"4f7e7c36e90160669e335136d61de87f431d3e900fd5a20dfde5ce59fa054556"} Nov 23 00:11:17 crc kubenswrapper[4743]: I1123 00:11:17.595337 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76vhw" event={"ID":"d65fc52f-316d-4e63-99f0-998c7fb04d89","Type":"ContainerStarted","Data":"78fed3756788bc4d63c516af90f5a88ba84a3f9ffacc1f603fb275ddd0bad4db"} Nov 23 00:11:17 crc kubenswrapper[4743]: I1123 00:11:17.599474 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfgc6" event={"ID":"cd3f05c7-315b-49fe-8a51-e17dba6b426d","Type":"ContainerStarted","Data":"2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da"} Nov 23 00:11:17 crc kubenswrapper[4743]: I1123 00:11:17.603966 4743 generic.go:334] "Generic (PLEG): container finished" podID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" containerID="bdb97f9faac7e03ef34d3621e3933e62689e027570f971873ce894ee544712c8" exitCode=0 Nov 23 00:11:17 crc kubenswrapper[4743]: I1123 00:11:17.604074 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chzlq" event={"ID":"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da","Type":"ContainerDied","Data":"bdb97f9faac7e03ef34d3621e3933e62689e027570f971873ce894ee544712c8"} Nov 23 00:11:18 crc kubenswrapper[4743]: I1123 00:11:18.616245 4743 generic.go:334] "Generic (PLEG): container finished" podID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" containerID="2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da" exitCode=0 Nov 23 
00:11:18 crc kubenswrapper[4743]: I1123 00:11:18.616338 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfgc6" event={"ID":"cd3f05c7-315b-49fe-8a51-e17dba6b426d","Type":"ContainerDied","Data":"2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da"} Nov 23 00:11:18 crc kubenswrapper[4743]: I1123 00:11:18.621882 4743 generic.go:334] "Generic (PLEG): container finished" podID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" containerID="4f7e7c36e90160669e335136d61de87f431d3e900fd5a20dfde5ce59fa054556" exitCode=0 Nov 23 00:11:18 crc kubenswrapper[4743]: I1123 00:11:18.623476 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkhpx" event={"ID":"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42","Type":"ContainerDied","Data":"4f7e7c36e90160669e335136d61de87f431d3e900fd5a20dfde5ce59fa054556"} Nov 23 00:11:19 crc kubenswrapper[4743]: I1123 00:11:19.654356 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-76vhw" podStartSLOduration=6.271776897 podStartE2EDuration="1m55.654330593s" podCreationTimestamp="2025-11-23 00:09:24 +0000 UTC" firstStartedPulling="2025-11-23 00:09:26.568069478 +0000 UTC m=+158.646167635" lastFinishedPulling="2025-11-23 00:11:15.950623204 +0000 UTC m=+268.028721331" observedRunningTime="2025-11-23 00:11:19.651959112 +0000 UTC m=+271.730057239" watchObservedRunningTime="2025-11-23 00:11:19.654330593 +0000 UTC m=+271.732428740" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:22.051336 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-dfr5p" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:23.727747 4743 patch_prober.go:28] interesting pod/router-default-5444994796-sqgm6 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:23.728310 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-sqgm6" podUID="e59f68b6-cb09-4c13-acc5-eb4b713711da" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:24.874581 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:24.874675 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:25.314319 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:25.314906 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:26.681679 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpmgn" event={"ID":"44f650f2-2d6e-40a1-9e6e-4a77dad347cd","Type":"ContainerStarted","Data":"caf99ea07e702022ba16016a9e439a40d63d947c793af594684eb55919ec0a77"} Nov 23 
00:11:30 crc kubenswrapper[4743]: I1123 00:11:26.709048 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qpmgn" podStartSLOduration=6.677796181 podStartE2EDuration="2m1.709020641s" podCreationTimestamp="2025-11-23 00:09:25 +0000 UTC" firstStartedPulling="2025-11-23 00:09:26.561646153 +0000 UTC m=+158.639744280" lastFinishedPulling="2025-11-23 00:11:21.592870603 +0000 UTC m=+273.670968740" observedRunningTime="2025-11-23 00:11:26.704375252 +0000 UTC m=+278.782473379" watchObservedRunningTime="2025-11-23 00:11:26.709020641 +0000 UTC m=+278.787118788" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:26.770088 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:26.770228 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:26.823331 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:27.391671 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:28.719721 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxwkd"] Nov 23 00:11:30 crc kubenswrapper[4743]: I1123 00:11:28.720105 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wxwkd" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerName="registry-server" containerID="cri-o://3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87" gracePeriod=2 Nov 23 00:11:33 crc kubenswrapper[4743]: I1123 00:11:33.747827 4743 generic.go:334] "Generic (PLEG): container finished" podID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerID="3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87" exitCode=0 Nov 23 00:11:33 crc kubenswrapper[4743]: I1123 00:11:33.747892 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxwkd" event={"ID":"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e","Type":"ContainerDied","Data":"3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87"} Nov 23 00:11:35 crc kubenswrapper[4743]: E1123 00:11:35.314335 4743 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87 is running failed: container process not found" containerID="3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 00:11:35 crc kubenswrapper[4743]: E1123 00:11:35.315670 4743 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87 is running failed: container process not found" containerID="3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 00:11:35 crc kubenswrapper[4743]: E1123 00:11:35.316327 4743 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= NotFound desc = container is not created or running: checking if PID of 3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87 is running failed: container process not found" containerID="3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 00:11:35 crc kubenswrapper[4743]: E1123 00:11:35.316527 4743 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-wxwkd" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerName="registry-server" Nov 23 00:11:35 crc kubenswrapper[4743]: I1123 00:11:35.441933 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:11:35 crc kubenswrapper[4743]: I1123 00:11:35.442228 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:11:35 crc kubenswrapper[4743]: I1123 00:11:35.491178 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:11:35 crc kubenswrapper[4743]: I1123 00:11:35.816410 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:11:35 crc kubenswrapper[4743]: I1123 00:11:35.865349 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qpmgn"] Nov 23 00:11:38 crc kubenswrapper[4743]: I1123 00:11:37.776877 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qpmgn" podUID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" containerName="registry-server" containerID="cri-o://caf99ea07e702022ba16016a9e439a40d63d947c793af594684eb55919ec0a77" gracePeriod=2 Nov 23 00:11:39 crc kubenswrapper[4743]: I1123 00:11:39.798047 4743 generic.go:334] "Generic (PLEG): container finished" podID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" containerID="caf99ea07e702022ba16016a9e439a40d63d947c793af594684eb55919ec0a77" exitCode=0 Nov 23 00:11:39 crc kubenswrapper[4743]: I1123 00:11:39.798158 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpmgn" event={"ID":"44f650f2-2d6e-40a1-9e6e-4a77dad347cd","Type":"ContainerDied","Data":"caf99ea07e702022ba16016a9e439a40d63d947c793af594684eb55919ec0a77"} Nov 23 00:11:41 crc kubenswrapper[4743]: I1123 00:11:41.045840 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:11:41 crc kubenswrapper[4743]: I1123 00:11:41.111582 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v44t6\" (UniqueName: \"kubernetes.io/projected/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-kube-api-access-v44t6\") pod \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " Nov 23 00:11:41 crc kubenswrapper[4743]: I1123 00:11:41.111756 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-catalog-content\") pod \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " Nov 23 00:11:41 crc kubenswrapper[4743]: I1123 00:11:41.111856 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-utilities\") pod \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\" (UID: \"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e\") " Nov 23 00:11:41 crc kubenswrapper[4743]: I1123 00:11:41.113441 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-utilities" (OuterVolumeSpecName: "utilities") pod "a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" (UID: "a5e3c40b-628a-443f-84b7-0ba7cb77aa1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:11:41 crc kubenswrapper[4743]: I1123 00:11:41.123854 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-kube-api-access-v44t6" (OuterVolumeSpecName: "kube-api-access-v44t6") pod "a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" (UID: "a5e3c40b-628a-443f-84b7-0ba7cb77aa1e"). InnerVolumeSpecName "kube-api-access-v44t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:11:41 crc kubenswrapper[4743]: I1123 00:11:41.213503 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v44t6\" (UniqueName: \"kubernetes.io/projected/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-kube-api-access-v44t6\") on node \"crc\" DevicePath \"\"" Nov 23 00:11:41 crc kubenswrapper[4743]: I1123 00:11:41.213559 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:11:41 crc kubenswrapper[4743]: I1123 00:11:41.814157 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxwkd" event={"ID":"a5e3c40b-628a-443f-84b7-0ba7cb77aa1e","Type":"ContainerDied","Data":"392a4503bab5dc92e34318059a3b06c09cba49dd61f3fea857803d8c9ab36123"} Nov 23 00:11:41 crc kubenswrapper[4743]: I1123 00:11:41.814322 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxwkd" Nov 23 00:11:41 crc kubenswrapper[4743]: I1123 00:11:41.814657 4743 scope.go:117] "RemoveContainer" containerID="3d2155dd6137f176117af823f498bc015a9e8f5857882d29708b258780689b87" Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.072998 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.127700 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-catalog-content\") pod \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.127882 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-utilities\") pod \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.127971 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6wkl\" (UniqueName: \"kubernetes.io/projected/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-kube-api-access-z6wkl\") pod \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\" (UID: \"44f650f2-2d6e-40a1-9e6e-4a77dad347cd\") " Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.129186 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-utilities" (OuterVolumeSpecName: "utilities") pod "44f650f2-2d6e-40a1-9e6e-4a77dad347cd" (UID: "44f650f2-2d6e-40a1-9e6e-4a77dad347cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.133556 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-kube-api-access-z6wkl" (OuterVolumeSpecName: "kube-api-access-z6wkl") pod "44f650f2-2d6e-40a1-9e6e-4a77dad347cd" (UID: "44f650f2-2d6e-40a1-9e6e-4a77dad347cd"). InnerVolumeSpecName "kube-api-access-z6wkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.256465 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.256530 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6wkl\" (UniqueName: \"kubernetes.io/projected/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-kube-api-access-z6wkl\") on node \"crc\" DevicePath \"\"" Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.478284 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" (UID: "a5e3c40b-628a-443f-84b7-0ba7cb77aa1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.561515 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.755180 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxwkd"] Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.760786 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wxwkd"] Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.825530 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpmgn" event={"ID":"44f650f2-2d6e-40a1-9e6e-4a77dad347cd","Type":"ContainerDied","Data":"845d5c675fc4bd5a80880e4abc49fd27a01a7d2ab93dc790e05e63e9c5ba5d09"} Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.825694 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qpmgn" Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.862354 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44f650f2-2d6e-40a1-9e6e-4a77dad347cd" (UID: "44f650f2-2d6e-40a1-9e6e-4a77dad347cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:11:42 crc kubenswrapper[4743]: I1123 00:11:42.868104 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44f650f2-2d6e-40a1-9e6e-4a77dad347cd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:11:43 crc kubenswrapper[4743]: I1123 00:11:43.160254 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qpmgn"] Nov 23 00:11:43 crc kubenswrapper[4743]: I1123 00:11:43.165722 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qpmgn"] Nov 23 00:11:44 crc kubenswrapper[4743]: I1123 00:11:44.731817 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" path="/var/lib/kubelet/pods/44f650f2-2d6e-40a1-9e6e-4a77dad347cd/volumes" Nov 23 00:11:44 crc kubenswrapper[4743]: I1123 00:11:44.860023 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" path="/var/lib/kubelet/pods/a5e3c40b-628a-443f-84b7-0ba7cb77aa1e/volumes" Nov 23 00:11:50 crc kubenswrapper[4743]: I1123 00:11:50.843353 4743 scope.go:117] "RemoveContainer" containerID="c5681e254ec3f507039bfbe087e843609d64792e7553441079420165f3b004af" Nov 23 00:11:54 crc kubenswrapper[4743]: I1123 00:11:54.805787 4743 scope.go:117] "RemoveContainer" containerID="d331ef9b32f55c581069007bc90c5634200b6d1621ad0901dc3626ce5df0554f" Nov 23 00:11:54 crc kubenswrapper[4743]: I1123 00:11:54.859039 4743 scope.go:117] "RemoveContainer" containerID="caf99ea07e702022ba16016a9e439a40d63d947c793af594684eb55919ec0a77" Nov 23 00:11:54 crc kubenswrapper[4743]: I1123 00:11:54.917042 4743 scope.go:117] "RemoveContainer" containerID="332bdae5a210a4a582dad812c305a36d57b5c38327aef41f68a4ed922be22ce3" Nov 23 00:11:54 crc kubenswrapper[4743]: I1123 00:11:54.970787 4743 scope.go:117] 
"RemoveContainer" containerID="d2377041ed873a94d0a7e61d73584cd7a454658cbd8614cfa327049568eeccb4" Nov 23 00:11:57 crc kubenswrapper[4743]: I1123 00:11:57.936307 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chzlq" event={"ID":"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da","Type":"ContainerStarted","Data":"2a4945e5f70015d2e52a73533ac7c3efe2a8c233d069dd00de973c18b4c00d65"} Nov 23 00:11:58 crc kubenswrapper[4743]: I1123 00:11:58.943379 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmjnm" event={"ID":"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7","Type":"ContainerStarted","Data":"9e4f34662392ba2c3b245f1de739083c80b30f02b1381dcc24f7ad0a1d7de018"} Nov 23 00:11:58 crc kubenswrapper[4743]: I1123 00:11:58.947209 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkhpx" event={"ID":"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42","Type":"ContainerStarted","Data":"0b93e358c48d451f73929e87166481a53a76a0b5e40e01a0aa1fde80aed15f8b"} Nov 23 00:11:58 crc kubenswrapper[4743]: I1123 00:11:58.950147 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfgc6" event={"ID":"cd3f05c7-315b-49fe-8a51-e17dba6b426d","Type":"ContainerStarted","Data":"c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863"} Nov 23 00:11:58 crc kubenswrapper[4743]: I1123 00:11:58.952695 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmkbh" event={"ID":"d27fe0a1-5c04-4f21-b376-d31db2fc095c","Type":"ContainerStarted","Data":"131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd"} Nov 23 00:11:59 crc kubenswrapper[4743]: I1123 00:11:59.983462 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-chzlq" podStartSLOduration=8.808199032 podStartE2EDuration="2m33.983424894s" podCreationTimestamp="2025-11-23 00:09:26 +0000 UTC" firstStartedPulling="2025-11-23 00:09:29.631958039 +0000 UTC m=+161.710056166" lastFinishedPulling="2025-11-23 00:11:54.807183901 +0000 UTC m=+306.885282028" observedRunningTime="2025-11-23 00:11:59.97700444 +0000 UTC m=+312.055102577" watchObservedRunningTime="2025-11-23 00:11:59.983424894 +0000 UTC m=+312.061523071" Nov 23 00:12:00 crc kubenswrapper[4743]: I1123 00:12:00.997152 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kmkbh" podStartSLOduration=21.616003958 podStartE2EDuration="2m33.997129714s" podCreationTimestamp="2025-11-23 00:09:27 +0000 UTC" firstStartedPulling="2025-11-23 00:09:29.644341938 +0000 UTC m=+161.722440065" lastFinishedPulling="2025-11-23 00:11:42.025467694 +0000 UTC m=+294.103565821" observedRunningTime="2025-11-23 00:12:00.995544494 +0000 UTC m=+313.073642661" watchObservedRunningTime="2025-11-23 00:12:00.997129714 +0000 UTC m=+313.075227841" Nov 23 00:12:01 crc kubenswrapper[4743]: I1123 00:12:01.024353 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fkhpx" podStartSLOduration=10.184579718 podStartE2EDuration="2m37.024328609s" podCreationTimestamp="2025-11-23 00:09:24 +0000 UTC" firstStartedPulling="2025-11-23 00:09:26.577105546 +0000 UTC m=+158.655203713" lastFinishedPulling="2025-11-23 00:11:53.416854477 +0000 UTC m=+305.494952604" observedRunningTime="2025-11-23 00:12:01.018614803 +0000 UTC m=+313.096712960" 
watchObservedRunningTime="2025-11-23 00:12:01.024328609 +0000 UTC m=+313.102426746" Nov 23 00:12:02 crc kubenswrapper[4743]: I1123 00:12:02.000404 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rmjnm" podStartSLOduration=13.77853228 podStartE2EDuration="2m35.000372836s" podCreationTimestamp="2025-11-23 00:09:27 +0000 UTC" firstStartedPulling="2025-11-23 00:09:29.620615465 +0000 UTC m=+161.698713612" lastFinishedPulling="2025-11-23 00:11:50.842456001 +0000 UTC m=+302.920554168" observedRunningTime="2025-11-23 00:12:01.997478972 +0000 UTC m=+314.075577109" watchObservedRunningTime="2025-11-23 00:12:02.000372836 +0000 UTC m=+314.078470963" Nov 23 00:12:02 crc kubenswrapper[4743]: I1123 00:12:02.019417 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cfgc6" podStartSLOduration=12.802762203 podStartE2EDuration="2m34.019394432s" podCreationTimestamp="2025-11-23 00:09:28 +0000 UTC" firstStartedPulling="2025-11-23 00:09:29.625094283 +0000 UTC m=+161.703192420" lastFinishedPulling="2025-11-23 00:11:50.841726512 +0000 UTC m=+302.919824649" observedRunningTime="2025-11-23 00:12:02.016981721 +0000 UTC m=+314.095079868" watchObservedRunningTime="2025-11-23 00:12:02.019394432 +0000 UTC m=+314.097492549" Nov 23 00:12:05 crc kubenswrapper[4743]: I1123 00:12:05.034255 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:12:05 crc kubenswrapper[4743]: I1123 00:12:05.034785 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:12:05 crc kubenswrapper[4743]: I1123 00:12:05.083232 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:12:06 crc kubenswrapper[4743]: I1123 00:12:06.059416 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:12:07 crc kubenswrapper[4743]: I1123 00:12:07.008224 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:12:07 crc kubenswrapper[4743]: I1123 00:12:07.009596 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:12:07 crc kubenswrapper[4743]: I1123 00:12:07.071792 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:12:07 crc kubenswrapper[4743]: I1123 00:12:07.475532 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:12:07 crc kubenswrapper[4743]: I1123 00:12:07.476048 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:12:07 crc kubenswrapper[4743]: I1123 00:12:07.518693 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:12:08 crc kubenswrapper[4743]: I1123 00:12:08.012282 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:12:08 crc kubenswrapper[4743]: I1123 00:12:08.012363 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:12:08 crc kubenswrapper[4743]: I1123 00:12:08.054592 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:12:08 crc kubenswrapper[4743]: I1123 00:12:08.058721 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:12:08 crc kubenswrapper[4743]: I1123 00:12:08.063843 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:12:08 crc kubenswrapper[4743]: I1123 00:12:08.418106 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:12:08 crc kubenswrapper[4743]: I1123 00:12:08.418218 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:12:08 crc kubenswrapper[4743]: I1123 00:12:08.479585 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:12:09 crc kubenswrapper[4743]: I1123 00:12:09.058205 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:12:09 crc kubenswrapper[4743]: I1123 00:12:09.065345 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.196675 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmkbh"] Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.198190 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kmkbh" podUID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" containerName="registry-server" containerID="cri-o://131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd" gracePeriod=2 Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.396297 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cfgc6"] Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.554287 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.615711 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dkrx\" (UniqueName: \"kubernetes.io/projected/d27fe0a1-5c04-4f21-b376-d31db2fc095c-kube-api-access-2dkrx\") pod \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.615798 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-utilities\") pod \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.615905 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-catalog-content\") pod \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\" (UID: \"d27fe0a1-5c04-4f21-b376-d31db2fc095c\") " Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.618429 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-utilities" (OuterVolumeSpecName: "utilities") pod "d27fe0a1-5c04-4f21-b376-d31db2fc095c" (UID: "d27fe0a1-5c04-4f21-b376-d31db2fc095c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.623406 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d27fe0a1-5c04-4f21-b376-d31db2fc095c-kube-api-access-2dkrx" (OuterVolumeSpecName: "kube-api-access-2dkrx") pod "d27fe0a1-5c04-4f21-b376-d31db2fc095c" (UID: "d27fe0a1-5c04-4f21-b376-d31db2fc095c"). InnerVolumeSpecName "kube-api-access-2dkrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.632715 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d27fe0a1-5c04-4f21-b376-d31db2fc095c" (UID: "d27fe0a1-5c04-4f21-b376-d31db2fc095c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.718137 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dkrx\" (UniqueName: \"kubernetes.io/projected/d27fe0a1-5c04-4f21-b376-d31db2fc095c-kube-api-access-2dkrx\") on node \"crc\" DevicePath \"\"" Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.718198 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:12:10 crc kubenswrapper[4743]: I1123 00:12:10.718212 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d27fe0a1-5c04-4f21-b376-d31db2fc095c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.035521 4743 generic.go:334] "Generic (PLEG): container finished" podID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" containerID="131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd" exitCode=0 Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.035633 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmkbh" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.035638 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmkbh" event={"ID":"d27fe0a1-5c04-4f21-b376-d31db2fc095c","Type":"ContainerDied","Data":"131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd"} Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.035761 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmkbh" event={"ID":"d27fe0a1-5c04-4f21-b376-d31db2fc095c","Type":"ContainerDied","Data":"d9acb553dd84eff40a52c26d29ff8ef12a761cb97a2ed635104055c65b330262"} Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.035769 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cfgc6" podUID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" containerName="registry-server" containerID="cri-o://c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863" gracePeriod=2 Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.035785 4743 scope.go:117] "RemoveContainer" containerID="131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.059539 4743 scope.go:117] "RemoveContainer" containerID="8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.060630 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmkbh"] Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.063653 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmkbh"] Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.087130 4743 scope.go:117] "RemoveContainer" containerID="7ea7b0b1f0e25123deb0a80d66972f1a570669c7e6f19f1dbcd684fae5308653" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.107138 4743 scope.go:117] "RemoveContainer" containerID="131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd" Nov 23 00:12:11 crc kubenswrapper[4743]: E1123 00:12:11.107753 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd\": container with ID starting with 131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd not found: ID does not exist" containerID="131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.107807 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd"} err="failed to get container status \"131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd\": rpc error: code = NotFound desc = could not find container \"131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd\": container with ID starting with 131a419a82e76e535584725aa140db145ec6377b58de7308f6a607e592a2bfbd not found: ID does not exist" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.107845 4743 scope.go:117] "RemoveContainer" containerID="8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070" Nov 23 00:12:11 crc kubenswrapper[4743]: E1123 00:12:11.108444 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070\": container with ID starting with 8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070 not found: ID does not exist" containerID="8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.108513 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070"} err="failed to get container status \"8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070\": rpc error: code = NotFound desc = could not find container \"8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070\": container with ID starting with 8b7482387c7e9b24b64073fdf91f13d7c0d22bbacd72a8b36acc62db7b6a2070 not found: ID does not exist" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.108543 4743 scope.go:117] "RemoveContainer" containerID="7ea7b0b1f0e25123deb0a80d66972f1a570669c7e6f19f1dbcd684fae5308653" Nov 23 00:12:11 crc kubenswrapper[4743]: E1123 00:12:11.108855 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ea7b0b1f0e25123deb0a80d66972f1a570669c7e6f19f1dbcd684fae5308653\": container with ID starting with 7ea7b0b1f0e25123deb0a80d66972f1a570669c7e6f19f1dbcd684fae5308653 not found: ID does not exist" containerID="7ea7b0b1f0e25123deb0a80d66972f1a570669c7e6f19f1dbcd684fae5308653" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.108881 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ea7b0b1f0e25123deb0a80d66972f1a570669c7e6f19f1dbcd684fae5308653"} err="failed to get container status \"7ea7b0b1f0e25123deb0a80d66972f1a570669c7e6f19f1dbcd684fae5308653\": rpc error: code = NotFound desc = could not find container \"7ea7b0b1f0e25123deb0a80d66972f1a570669c7e6f19f1dbcd684fae5308653\": container with ID starting with 7ea7b0b1f0e25123deb0a80d66972f1a570669c7e6f19f1dbcd684fae5308653 not found: ID does not exist" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.601686 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.733542 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-catalog-content\") pod \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.733631 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-utilities\") pod \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.733702 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh76t\" (UniqueName: \"kubernetes.io/projected/cd3f05c7-315b-49fe-8a51-e17dba6b426d-kube-api-access-sh76t\") pod \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\" (UID: \"cd3f05c7-315b-49fe-8a51-e17dba6b426d\") " Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.734376 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-utilities" (OuterVolumeSpecName: "utilities") pod "cd3f05c7-315b-49fe-8a51-e17dba6b426d" (UID: "cd3f05c7-315b-49fe-8a51-e17dba6b426d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.738641 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3f05c7-315b-49fe-8a51-e17dba6b426d-kube-api-access-sh76t" (OuterVolumeSpecName: "kube-api-access-sh76t") pod "cd3f05c7-315b-49fe-8a51-e17dba6b426d" (UID: "cd3f05c7-315b-49fe-8a51-e17dba6b426d"). InnerVolumeSpecName "kube-api-access-sh76t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.818325 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd3f05c7-315b-49fe-8a51-e17dba6b426d" (UID: "cd3f05c7-315b-49fe-8a51-e17dba6b426d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.835639 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh76t\" (UniqueName: \"kubernetes.io/projected/cd3f05c7-315b-49fe-8a51-e17dba6b426d-kube-api-access-sh76t\") on node \"crc\" DevicePath \"\"" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.835705 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:12:11 crc kubenswrapper[4743]: I1123 00:12:11.835721 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3f05c7-315b-49fe-8a51-e17dba6b426d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.047553 4743 generic.go:334] "Generic (PLEG): container finished" podID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" containerID="c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863" exitCode=0 Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.047621 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfgc6" event={"ID":"cd3f05c7-315b-49fe-8a51-e17dba6b426d","Type":"ContainerDied","Data":"c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863"} Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.047673 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfgc6" event={"ID":"cd3f05c7-315b-49fe-8a51-e17dba6b426d","Type":"ContainerDied","Data":"fff93ae0bf5e99eb242b40756b83d6bc9332753be40915a7d16e0f27b95c64f9"} Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.047732 4743 scope.go:117] "RemoveContainer" containerID="c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.047953 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cfgc6" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.069153 4743 scope.go:117] "RemoveContainer" containerID="2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.097774 4743 scope.go:117] "RemoveContainer" containerID="a672d5172433b9196791a95c1cf8dd089df3d5e9c526024e015c03ca5d05680d" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.114623 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cfgc6"] Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.122249 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cfgc6"] Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.124072 4743 scope.go:117] "RemoveContainer" containerID="c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863" Nov 23 00:12:12 crc kubenswrapper[4743]: E1123 00:12:12.124991 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863\": container with ID starting with c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863 not found: ID does not exist" containerID="c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.125025 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863"} err="failed to get container status \"c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863\": rpc error: code = NotFound desc = could not find container \"c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863\": container with ID starting with c7d7a7bf29d0f934856e0faf44f6fd6805bdfc0ef8b14bf274cf16caf052f863 not found: ID does not exist" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.125070 4743 scope.go:117] "RemoveContainer" containerID="2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da" Nov 23 00:12:12 crc kubenswrapper[4743]: E1123 00:12:12.125433 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da\": container with ID starting with 2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da not found: ID does not exist" containerID="2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.125472 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da"} err="failed to get container status \"2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da\": rpc error: code = NotFound desc = could not find container \"2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da\": container with ID starting with 2b42d2eb9d1d5d894cf99723a625bfcf536ea7ef3979fe4ba6ffbe22ec43f5da not found: ID does not exist" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.125515 4743 scope.go:117] "RemoveContainer" containerID="a672d5172433b9196791a95c1cf8dd089df3d5e9c526024e015c03ca5d05680d" Nov 23 00:12:12 crc kubenswrapper[4743]: E1123 00:12:12.129305 4743 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"a672d5172433b9196791a95c1cf8dd089df3d5e9c526024e015c03ca5d05680d\": container with ID starting with a672d5172433b9196791a95c1cf8dd089df3d5e9c526024e015c03ca5d05680d not found: ID does not exist" containerID="a672d5172433b9196791a95c1cf8dd089df3d5e9c526024e015c03ca5d05680d" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.129354 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a672d5172433b9196791a95c1cf8dd089df3d5e9c526024e015c03ca5d05680d"} err="failed to get container status \"a672d5172433b9196791a95c1cf8dd089df3d5e9c526024e015c03ca5d05680d\": rpc error: code = NotFound desc = could not find container \"a672d5172433b9196791a95c1cf8dd089df3d5e9c526024e015c03ca5d05680d\": container with ID starting with a672d5172433b9196791a95c1cf8dd089df3d5e9c526024e015c03ca5d05680d not found: ID does not exist" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.731279 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" path="/var/lib/kubelet/pods/cd3f05c7-315b-49fe-8a51-e17dba6b426d/volumes" Nov 23 00:12:12 crc kubenswrapper[4743]: I1123 00:12:12.732408 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" path="/var/lib/kubelet/pods/d27fe0a1-5c04-4f21-b376-d31db2fc095c/volumes" Nov 23 00:12:53 crc kubenswrapper[4743]: I1123 00:12:53.690067 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:12:53 crc kubenswrapper[4743]: I1123 00:12:53.690672 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.150262 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fkhpx"] Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.153030 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fkhpx" podUID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" containerName="registry-server" containerID="cri-o://0b93e358c48d451f73929e87166481a53a76a0b5e40e01a0aa1fde80aed15f8b" gracePeriod=30 Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.158332 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-76vhw"] Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.158686 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-76vhw" podUID="d65fc52f-316d-4e63-99f0-998c7fb04d89" containerName="registry-server" containerID="cri-o://78fed3756788bc4d63c516af90f5a88ba84a3f9ffacc1f603fb275ddd0bad4db" gracePeriod=30 Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.171313 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rr7k6"] Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.171665 4743 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" podUID="963d8537-d384-4feb-a776-da74096c0884" containerName="marketplace-operator" containerID="cri-o://ab36923f1d05128d7e08ce5c9cbf8224e46334eaeda7dea0c607cfea67db8a09" gracePeriod=30 Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216191 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b7nhm"] Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216458 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" containerName="extract-utilities" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216507 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" containerName="extract-utilities" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216518 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216527 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216548 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae" containerName="image-pruner" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216558 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae" containerName="image-pruner" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216571 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216578 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216592 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" containerName="extract-content" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216599 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" containerName="extract-content" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216607 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" containerName="extract-content" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216614 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" containerName="extract-content" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216626 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerName="extract-content" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216633 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerName="extract-content" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216642 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerName="extract-utilities" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216649 4743 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerName="extract-utilities" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216657 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" containerName="extract-utilities" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216664 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" containerName="extract-utilities" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216672 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216680 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216688 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" containerName="extract-utilities" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216696 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" containerName="extract-utilities" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216706 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216712 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: E1123 00:13:08.216722 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" containerName="extract-content" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216729 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" containerName="extract-content" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216861 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb4b460-4fcd-4dc3-b541-9b6852a7c0ae" containerName="image-pruner" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216877 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="44f650f2-2d6e-40a1-9e6e-4a77dad347cd" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216885 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e3c40b-628a-443f-84b7-0ba7cb77aa1e" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216894 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27fe0a1-5c04-4f21-b376-d31db2fc095c" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.216900 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3f05c7-315b-49fe-8a51-e17dba6b426d" containerName="registry-server" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.217460 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.223456 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-chzlq"] Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.223774 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-chzlq" podUID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" containerName="registry-server" containerID="cri-o://2a4945e5f70015d2e52a73533ac7c3efe2a8c233d069dd00de973c18b4c00d65" gracePeriod=30 Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.235722 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmjnm"] Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.236001 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rmjnm" podUID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" containerName="registry-server" containerID="cri-o://9e4f34662392ba2c3b245f1de739083c80b30f02b1381dcc24f7ad0a1d7de018" gracePeriod=30 Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.241671 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b7nhm"] Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.318872 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f9276af-fa7c-4f89-a06e-0e89e2bcc76d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b7nhm\" (UID: \"8f9276af-fa7c-4f89-a06e-0e89e2bcc76d\") " pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.318946 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjxn6\" (UniqueName: \"kubernetes.io/projected/8f9276af-fa7c-4f89-a06e-0e89e2bcc76d-kube-api-access-zjxn6\") pod \"marketplace-operator-79b997595-b7nhm\" (UID: \"8f9276af-fa7c-4f89-a06e-0e89e2bcc76d\") " pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.318998 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f9276af-fa7c-4f89-a06e-0e89e2bcc76d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b7nhm\" (UID: \"8f9276af-fa7c-4f89-a06e-0e89e2bcc76d\") " pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.420311 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjxn6\" (UniqueName: \"kubernetes.io/projected/8f9276af-fa7c-4f89-a06e-0e89e2bcc76d-kube-api-access-zjxn6\") pod \"marketplace-operator-79b997595-b7nhm\" (UID: \"8f9276af-fa7c-4f89-a06e-0e89e2bcc76d\") " pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.420715 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f9276af-fa7c-4f89-a06e-0e89e2bcc76d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b7nhm\" (UID: \"8f9276af-fa7c-4f89-a06e-0e89e2bcc76d\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.420744 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f9276af-fa7c-4f89-a06e-0e89e2bcc76d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b7nhm\" (UID: \"8f9276af-fa7c-4f89-a06e-0e89e2bcc76d\") " pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.422466 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f9276af-fa7c-4f89-a06e-0e89e2bcc76d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b7nhm\" (UID: \"8f9276af-fa7c-4f89-a06e-0e89e2bcc76d\") " pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.426641 4743 generic.go:334] "Generic (PLEG): container finished" podID="963d8537-d384-4feb-a776-da74096c0884" containerID="ab36923f1d05128d7e08ce5c9cbf8224e46334eaeda7dea0c607cfea67db8a09" exitCode=0 Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.426738 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" event={"ID":"963d8537-d384-4feb-a776-da74096c0884","Type":"ContainerDied","Data":"ab36923f1d05128d7e08ce5c9cbf8224e46334eaeda7dea0c607cfea67db8a09"} Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.428917 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f9276af-fa7c-4f89-a06e-0e89e2bcc76d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b7nhm\" (UID: \"8f9276af-fa7c-4f89-a06e-0e89e2bcc76d\") " pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.436863 4743 generic.go:334] "Generic (PLEG): container finished" podID="d65fc52f-316d-4e63-99f0-998c7fb04d89" containerID="78fed3756788bc4d63c516af90f5a88ba84a3f9ffacc1f603fb275ddd0bad4db" exitCode=0 Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.436957 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76vhw" event={"ID":"d65fc52f-316d-4e63-99f0-998c7fb04d89","Type":"ContainerDied","Data":"78fed3756788bc4d63c516af90f5a88ba84a3f9ffacc1f603fb275ddd0bad4db"} Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.439979 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjxn6\" (UniqueName: \"kubernetes.io/projected/8f9276af-fa7c-4f89-a06e-0e89e2bcc76d-kube-api-access-zjxn6\") pod \"marketplace-operator-79b997595-b7nhm\" (UID: \"8f9276af-fa7c-4f89-a06e-0e89e2bcc76d\") " pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.445750 4743 generic.go:334] "Generic (PLEG): container finished" podID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" containerID="2a4945e5f70015d2e52a73533ac7c3efe2a8c233d069dd00de973c18b4c00d65" exitCode=0 Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.445845 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chzlq" event={"ID":"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da","Type":"ContainerDied","Data":"2a4945e5f70015d2e52a73533ac7c3efe2a8c233d069dd00de973c18b4c00d65"} Nov 23 00:13:08 crc 
kubenswrapper[4743]: I1123 00:13:08.448599 4743 generic.go:334] "Generic (PLEG): container finished" podID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" containerID="9e4f34662392ba2c3b245f1de739083c80b30f02b1381dcc24f7ad0a1d7de018" exitCode=0 Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.448671 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmjnm" event={"ID":"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7","Type":"ContainerDied","Data":"9e4f34662392ba2c3b245f1de739083c80b30f02b1381dcc24f7ad0a1d7de018"} Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.454291 4743 generic.go:334] "Generic (PLEG): container finished" podID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" containerID="0b93e358c48d451f73929e87166481a53a76a0b5e40e01a0aa1fde80aed15f8b" exitCode=0 Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.454331 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkhpx" event={"ID":"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42","Type":"ContainerDied","Data":"0b93e358c48d451f73929e87166481a53a76a0b5e40e01a0aa1fde80aed15f8b"} Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.582051 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.592846 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.597879 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.628997 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.630520 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.696178 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.726172 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-utilities\") pod \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.726307 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-catalog-content\") pod \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.726494 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s8zw\" (UniqueName: \"kubernetes.io/projected/d65fc52f-316d-4e63-99f0-998c7fb04d89-kube-api-access-2s8zw\") pod \"d65fc52f-316d-4e63-99f0-998c7fb04d89\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.726715 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-utilities\") pod \"d65fc52f-316d-4e63-99f0-998c7fb04d89\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.727673 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-utilities" (OuterVolumeSpecName: "utilities") pod "d65fc52f-316d-4e63-99f0-998c7fb04d89" (UID: "d65fc52f-316d-4e63-99f0-998c7fb04d89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.728023 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-catalog-content\") pod \"d65fc52f-316d-4e63-99f0-998c7fb04d89\" (UID: \"d65fc52f-316d-4e63-99f0-998c7fb04d89\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.728074 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcjn2\" (UniqueName: \"kubernetes.io/projected/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-kube-api-access-fcjn2\") pod \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\" (UID: \"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.728572 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.731798 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d65fc52f-316d-4e63-99f0-998c7fb04d89-kube-api-access-2s8zw" (OuterVolumeSpecName: "kube-api-access-2s8zw") pod "d65fc52f-316d-4e63-99f0-998c7fb04d89" (UID: "d65fc52f-316d-4e63-99f0-998c7fb04d89"). InnerVolumeSpecName "kube-api-access-2s8zw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.732791 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-utilities" (OuterVolumeSpecName: "utilities") pod "a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" (UID: "a93593fa-0b89-4a93-8edf-32e2e6c3b1d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.736286 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-kube-api-access-fcjn2" (OuterVolumeSpecName: "kube-api-access-fcjn2") pod "a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" (UID: "a93593fa-0b89-4a93-8edf-32e2e6c3b1d7"). InnerVolumeSpecName "kube-api-access-fcjn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.800650 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d65fc52f-316d-4e63-99f0-998c7fb04d89" (UID: "d65fc52f-316d-4e63-99f0-998c7fb04d89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.831247 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hlp6\" (UniqueName: \"kubernetes.io/projected/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-kube-api-access-4hlp6\") pod \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\" (UID: \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.831439 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-catalog-content\") pod \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\" (UID: \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.831476 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/963d8537-d384-4feb-a776-da74096c0884-marketplace-operator-metrics\") pod \"963d8537-d384-4feb-a776-da74096c0884\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.831514 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/963d8537-d384-4feb-a776-da74096c0884-marketplace-trusted-ca\") pod \"963d8537-d384-4feb-a776-da74096c0884\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.831562 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-catalog-content\") pod \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.831600 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-utilities\") pod \"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\" (UID: 
\"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.831627 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmkxm\" (UniqueName: \"kubernetes.io/projected/963d8537-d384-4feb-a776-da74096c0884-kube-api-access-vmkxm\") pod \"963d8537-d384-4feb-a776-da74096c0884\" (UID: \"963d8537-d384-4feb-a776-da74096c0884\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.831703 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-utilities\") pod \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.831731 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwp7j\" (UniqueName: \"kubernetes.io/projected/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-kube-api-access-wwp7j\") pod \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\" (UID: \"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da\") " Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.832794 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s8zw\" (UniqueName: \"kubernetes.io/projected/d65fc52f-316d-4e63-99f0-998c7fb04d89-kube-api-access-2s8zw\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.832816 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d65fc52f-316d-4e63-99f0-998c7fb04d89-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.832849 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcjn2\" (UniqueName: \"kubernetes.io/projected/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-kube-api-access-fcjn2\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.832864 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.835911 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-utilities" (OuterVolumeSpecName: "utilities") pod "b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" (UID: "b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.836015 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-utilities" (OuterVolumeSpecName: "utilities") pod "d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" (UID: "d2dcacc3-4d6d-4979-9e22-7ea3b0b557da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.836031 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-kube-api-access-wwp7j" (OuterVolumeSpecName: "kube-api-access-wwp7j") pod "d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" (UID: "d2dcacc3-4d6d-4979-9e22-7ea3b0b557da"). InnerVolumeSpecName "kube-api-access-wwp7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.836785 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963d8537-d384-4feb-a776-da74096c0884-kube-api-access-vmkxm" (OuterVolumeSpecName: "kube-api-access-vmkxm") pod "963d8537-d384-4feb-a776-da74096c0884" (UID: "963d8537-d384-4feb-a776-da74096c0884"). InnerVolumeSpecName "kube-api-access-vmkxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.837149 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963d8537-d384-4feb-a776-da74096c0884-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "963d8537-d384-4feb-a776-da74096c0884" (UID: "963d8537-d384-4feb-a776-da74096c0884"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.838173 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-kube-api-access-4hlp6" (OuterVolumeSpecName: "kube-api-access-4hlp6") pod "b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" (UID: "b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42"). InnerVolumeSpecName "kube-api-access-4hlp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.839573 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963d8537-d384-4feb-a776-da74096c0884-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "963d8537-d384-4feb-a776-da74096c0884" (UID: "963d8537-d384-4feb-a776-da74096c0884"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.859144 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" (UID: "a93593fa-0b89-4a93-8edf-32e2e6c3b1d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.868647 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" (UID: "d2dcacc3-4d6d-4979-9e22-7ea3b0b557da"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: W1123 00:13:08.894434 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f9276af_fa7c_4f89_a06e_0e89e2bcc76d.slice/crio-0765063b8886b0a42aedcb851419ac0eebc05fbb3abdece12ed978377927e866 WatchSource:0}: Error finding container 0765063b8886b0a42aedcb851419ac0eebc05fbb3abdece12ed978377927e866: Status 404 returned error can't find the container with id 0765063b8886b0a42aedcb851419ac0eebc05fbb3abdece12ed978377927e866 Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.894942 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" (UID: "b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.896167 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b7nhm"] Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.934251 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hlp6\" (UniqueName: \"kubernetes.io/projected/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-kube-api-access-4hlp6\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.934283 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.934292 4743 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/963d8537-d384-4feb-a776-da74096c0884-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.934302 4743 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/963d8537-d384-4feb-a776-da74096c0884-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.934310 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.934320 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.934327 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmkxm\" (UniqueName: \"kubernetes.io/projected/963d8537-d384-4feb-a776-da74096c0884-kube-api-access-vmkxm\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.934336 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.934344 4743 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:08 crc kubenswrapper[4743]: I1123 00:13:08.934352 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwp7j\" (UniqueName: \"kubernetes.io/projected/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da-kube-api-access-wwp7j\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.461511 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.461475 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rr7k6" event={"ID":"963d8537-d384-4feb-a776-da74096c0884","Type":"ContainerDied","Data":"447514af9ba686c35b0836b96b3ddfa1b266195ab7a5c6198d40d9604c9320d2"} Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.461655 4743 scope.go:117] "RemoveContainer" containerID="ab36923f1d05128d7e08ce5c9cbf8224e46334eaeda7dea0c607cfea67db8a09" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.465223 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76vhw" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.465347 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76vhw" event={"ID":"d65fc52f-316d-4e63-99f0-998c7fb04d89","Type":"ContainerDied","Data":"318072d9d108c089cfcfa2efe839b7ffab0c3cf67fcac319ef370a7fa473bfc3"} Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.469730 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" event={"ID":"8f9276af-fa7c-4f89-a06e-0e89e2bcc76d","Type":"ContainerStarted","Data":"6568e1bd63b25b83a862d6b0591ef3c09b19b5995f563046b9bca610f23b3485"} Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.470037 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" event={"ID":"8f9276af-fa7c-4f89-a06e-0e89e2bcc76d","Type":"ContainerStarted","Data":"0765063b8886b0a42aedcb851419ac0eebc05fbb3abdece12ed978377927e866"} Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.470062 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.472999 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chzlq" event={"ID":"d2dcacc3-4d6d-4979-9e22-7ea3b0b557da","Type":"ContainerDied","Data":"92c3c8b0088694a03ff2b67f811cf81e719f8ba62a4b03a2a4309de1c6300972"} Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.473091 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chzlq" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.476558 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.477462 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmjnm" event={"ID":"a93593fa-0b89-4a93-8edf-32e2e6c3b1d7","Type":"ContainerDied","Data":"ed245aa00cab6f7fdbd32f74746b55a9b72f8f6a1d9e5a6c44715c3eb6d79e75"} Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.477524 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmjnm" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.479535 4743 scope.go:117] "RemoveContainer" containerID="78fed3756788bc4d63c516af90f5a88ba84a3f9ffacc1f603fb275ddd0bad4db" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.481662 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkhpx" event={"ID":"b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42","Type":"ContainerDied","Data":"59520d30614d4ca4e83d98f23903156ed849266c205408dc135055aa40c68495"} Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.481762 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkhpx" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.498448 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b7nhm" podStartSLOduration=1.498419118 podStartE2EDuration="1.498419118s" podCreationTimestamp="2025-11-23 00:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:13:09.494096389 +0000 UTC m=+381.572194536" watchObservedRunningTime="2025-11-23 00:13:09.498419118 +0000 UTC m=+381.576517245" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.500379 4743 scope.go:117] "RemoveContainer" containerID="562905a2b343571513ee90839dfc3e3345a86a00aab1a9dbf58268f2f06cbb61" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.549441 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-76vhw"] Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.553827 4743 scope.go:117] "RemoveContainer" containerID="5584e43a4b1e58ef79dfe15758d39d21a75575374863d1633ee4946967f681a1" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.554258 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-76vhw"] Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.557650 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rr7k6"] Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.559633 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rr7k6"] Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.577179 4743 scope.go:117] "RemoveContainer" containerID="2a4945e5f70015d2e52a73533ac7c3efe2a8c233d069dd00de973c18b4c00d65" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.580293 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-chzlq"] Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 
00:13:09.583088 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-chzlq"] Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.592247 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmjnm"] Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.602996 4743 scope.go:117] "RemoveContainer" containerID="bdb97f9faac7e03ef34d3621e3933e62689e027570f971873ce894ee544712c8" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.614122 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rmjnm"] Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.623162 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fkhpx"] Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.629126 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fkhpx"] Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.701845 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgddj"] Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.702614 4743 scope.go:117] "RemoveContainer" containerID="4b2abe06f537c20ac545ef70f13058ae5af01be5524b2b779e1f4cc8834af8b2" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.733456 4743 scope.go:117] "RemoveContainer" containerID="9e4f34662392ba2c3b245f1de739083c80b30f02b1381dcc24f7ad0a1d7de018" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.765474 4743 scope.go:117] "RemoveContainer" containerID="75060fce8b1d17b4cd5874d4952edc4177c487b97ccec5b65fe90328c498837e" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.793760 4743 scope.go:117] "RemoveContainer" containerID="ebee248e3d1d80df4c2715a2f292373cbbfa5b5bb97908e23b24558d5151dde0" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.806825 4743 scope.go:117] "RemoveContainer" containerID="0b93e358c48d451f73929e87166481a53a76a0b5e40e01a0aa1fde80aed15f8b" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.821227 4743 scope.go:117] "RemoveContainer" containerID="4f7e7c36e90160669e335136d61de87f431d3e900fd5a20dfde5ce59fa054556" Nov 23 00:13:09 crc kubenswrapper[4743]: I1123 00:13:09.839430 4743 scope.go:117] "RemoveContainer" containerID="a09f98303e69b82227a0dbb98b9c0d3d809129aaf11e8c78f2dd9babd73c57c3" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.367938 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qlkrc"] Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368185 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65fc52f-316d-4e63-99f0-998c7fb04d89" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368200 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65fc52f-316d-4e63-99f0-998c7fb04d89" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368211 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" containerName="extract-utilities" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368217 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" containerName="extract-utilities" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368228 4743 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368234 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368242 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368248 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368258 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963d8537-d384-4feb-a776-da74096c0884" containerName="marketplace-operator" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368264 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="963d8537-d384-4feb-a776-da74096c0884" containerName="marketplace-operator" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368272 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65fc52f-316d-4e63-99f0-998c7fb04d89" containerName="extract-utilities" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368278 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65fc52f-316d-4e63-99f0-998c7fb04d89" containerName="extract-utilities" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368287 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65fc52f-316d-4e63-99f0-998c7fb04d89" containerName="extract-content" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368292 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65fc52f-316d-4e63-99f0-998c7fb04d89" containerName="extract-content" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368305 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" containerName="extract-utilities" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368313 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" containerName="extract-utilities" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368320 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368326 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368335 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" containerName="extract-content" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368342 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" containerName="extract-content" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368353 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" containerName="extract-utilities" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368360 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" containerName="extract-utilities" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368371 4743 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" containerName="extract-content" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368376 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" containerName="extract-content" Nov 23 00:13:10 crc kubenswrapper[4743]: E1123 00:13:10.368385 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" containerName="extract-content" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368390 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" containerName="extract-content" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368499 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="963d8537-d384-4feb-a776-da74096c0884" containerName="marketplace-operator" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368511 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368520 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="d65fc52f-316d-4e63-99f0-998c7fb04d89" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368529 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.368536 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" containerName="registry-server" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.369331 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.371760 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.381999 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlkrc"] Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.468679 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmml\" (UniqueName: \"kubernetes.io/projected/ff76a6bc-06e4-4da8-828c-57a37fa57681-kube-api-access-zcmml\") pod \"redhat-marketplace-qlkrc\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.468857 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-utilities\") pod \"redhat-marketplace-qlkrc\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.469181 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-catalog-content\") pod \"redhat-marketplace-qlkrc\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.571166 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-catalog-content\") pod \"redhat-marketplace-qlkrc\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.571325 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcmml\" (UniqueName: \"kubernetes.io/projected/ff76a6bc-06e4-4da8-828c-57a37fa57681-kube-api-access-zcmml\") pod \"redhat-marketplace-qlkrc\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.571348 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-utilities\") pod \"redhat-marketplace-qlkrc\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.571941 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c555x"] Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.572245 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-utilities\") pod \"redhat-marketplace-qlkrc\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.572644 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-catalog-content\") pod \"redhat-marketplace-qlkrc\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.573663 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.580021 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.582572 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c555x"] Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.608026 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcmml\" (UniqueName: \"kubernetes.io/projected/ff76a6bc-06e4-4da8-828c-57a37fa57681-kube-api-access-zcmml\") pod \"redhat-marketplace-qlkrc\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.672536 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0dbcfe-3399-4146-b367-97582fb884cc-utilities\") pod \"redhat-operators-c555x\" (UID: \"fb0dbcfe-3399-4146-b367-97582fb884cc\") " pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.672593 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0dbcfe-3399-4146-b367-97582fb884cc-catalog-content\") pod \"redhat-operators-c555x\" (UID: \"fb0dbcfe-3399-4146-b367-97582fb884cc\") " pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.673148 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w252l\" (UniqueName: \"kubernetes.io/projected/fb0dbcfe-3399-4146-b367-97582fb884cc-kube-api-access-w252l\") pod \"redhat-operators-c555x\" (UID: \"fb0dbcfe-3399-4146-b367-97582fb884cc\") " pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.698539 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.735636 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="963d8537-d384-4feb-a776-da74096c0884" path="/var/lib/kubelet/pods/963d8537-d384-4feb-a776-da74096c0884/volumes" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.736465 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a93593fa-0b89-4a93-8edf-32e2e6c3b1d7" path="/var/lib/kubelet/pods/a93593fa-0b89-4a93-8edf-32e2e6c3b1d7/volumes" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.737109 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42" path="/var/lib/kubelet/pods/b8ba4c32-40bf-4cdb-b7e3-cdf92ccbfc42/volumes" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.738152 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2dcacc3-4d6d-4979-9e22-7ea3b0b557da" path="/var/lib/kubelet/pods/d2dcacc3-4d6d-4979-9e22-7ea3b0b557da/volumes" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.738721 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d65fc52f-316d-4e63-99f0-998c7fb04d89" path="/var/lib/kubelet/pods/d65fc52f-316d-4e63-99f0-998c7fb04d89/volumes" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.774732 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0dbcfe-3399-4146-b367-97582fb884cc-utilities\") pod \"redhat-operators-c555x\" (UID: \"fb0dbcfe-3399-4146-b367-97582fb884cc\") " pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.774796 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0dbcfe-3399-4146-b367-97582fb884cc-catalog-content\") pod \"redhat-operators-c555x\" (UID: \"fb0dbcfe-3399-4146-b367-97582fb884cc\") " pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.774881 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w252l\" (UniqueName: \"kubernetes.io/projected/fb0dbcfe-3399-4146-b367-97582fb884cc-kube-api-access-w252l\") pod \"redhat-operators-c555x\" (UID: \"fb0dbcfe-3399-4146-b367-97582fb884cc\") " pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.776108 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0dbcfe-3399-4146-b367-97582fb884cc-utilities\") pod \"redhat-operators-c555x\" (UID: \"fb0dbcfe-3399-4146-b367-97582fb884cc\") " pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.776384 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0dbcfe-3399-4146-b367-97582fb884cc-catalog-content\") pod \"redhat-operators-c555x\" (UID: \"fb0dbcfe-3399-4146-b367-97582fb884cc\") " pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.799552 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w252l\" (UniqueName: \"kubernetes.io/projected/fb0dbcfe-3399-4146-b367-97582fb884cc-kube-api-access-w252l\") pod \"redhat-operators-c555x\" (UID: 
\"fb0dbcfe-3399-4146-b367-97582fb884cc\") " pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:10 crc kubenswrapper[4743]: I1123 00:13:10.900217 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:11 crc kubenswrapper[4743]: I1123 00:13:11.120404 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlkrc"] Nov 23 00:13:11 crc kubenswrapper[4743]: I1123 00:13:11.142886 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c555x"] Nov 23 00:13:11 crc kubenswrapper[4743]: W1123 00:13:11.148622 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb0dbcfe_3399_4146_b367_97582fb884cc.slice/crio-62c79498c909c343c731fceb3b0e8d606a59c81503a111831d17c034c2a04784 WatchSource:0}: Error finding container 62c79498c909c343c731fceb3b0e8d606a59c81503a111831d17c034c2a04784: Status 404 returned error can't find the container with id 62c79498c909c343c731fceb3b0e8d606a59c81503a111831d17c034c2a04784 Nov 23 00:13:11 crc kubenswrapper[4743]: I1123 00:13:11.502137 4743 generic.go:334] "Generic (PLEG): container finished" podID="ff76a6bc-06e4-4da8-828c-57a37fa57681" containerID="b980237875ad8f8f1849e4415769231893325ddc7cd427a7a04005ef193458c8" exitCode=0 Nov 23 00:13:11 crc kubenswrapper[4743]: I1123 00:13:11.502253 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlkrc" event={"ID":"ff76a6bc-06e4-4da8-828c-57a37fa57681","Type":"ContainerDied","Data":"b980237875ad8f8f1849e4415769231893325ddc7cd427a7a04005ef193458c8"} Nov 23 00:13:11 crc kubenswrapper[4743]: I1123 00:13:11.502292 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlkrc" event={"ID":"ff76a6bc-06e4-4da8-828c-57a37fa57681","Type":"ContainerStarted","Data":"06655b97f1687dda8ab171db45ffad00c8b499803d90b5b686db380f1992cbcd"} Nov 23 00:13:11 crc kubenswrapper[4743]: I1123 00:13:11.506373 4743 generic.go:334] "Generic (PLEG): container finished" podID="fb0dbcfe-3399-4146-b367-97582fb884cc" containerID="159817c81dce90712f9edb5a77e6347c1cb9447ffbdb223c6d67302461ec3bc2" exitCode=0 Nov 23 00:13:11 crc kubenswrapper[4743]: I1123 00:13:11.506502 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c555x" event={"ID":"fb0dbcfe-3399-4146-b367-97582fb884cc","Type":"ContainerDied","Data":"159817c81dce90712f9edb5a77e6347c1cb9447ffbdb223c6d67302461ec3bc2"} Nov 23 00:13:11 crc kubenswrapper[4743]: I1123 00:13:11.506799 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c555x" event={"ID":"fb0dbcfe-3399-4146-b367-97582fb884cc","Type":"ContainerStarted","Data":"62c79498c909c343c731fceb3b0e8d606a59c81503a111831d17c034c2a04784"} Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.770932 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g26ff"] Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.775116 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.778002 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.781436 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g26ff"] Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.801847 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p69x\" (UniqueName: \"kubernetes.io/projected/25831da8-a752-4cf2-9154-8cc119484cdf-kube-api-access-7p69x\") pod \"certified-operators-g26ff\" (UID: \"25831da8-a752-4cf2-9154-8cc119484cdf\") " pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.801892 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25831da8-a752-4cf2-9154-8cc119484cdf-utilities\") pod \"certified-operators-g26ff\" (UID: \"25831da8-a752-4cf2-9154-8cc119484cdf\") " pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.801934 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25831da8-a752-4cf2-9154-8cc119484cdf-catalog-content\") pod \"certified-operators-g26ff\" (UID: \"25831da8-a752-4cf2-9154-8cc119484cdf\") " pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.902927 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p69x\" (UniqueName: \"kubernetes.io/projected/25831da8-a752-4cf2-9154-8cc119484cdf-kube-api-access-7p69x\") pod \"certified-operators-g26ff\" (UID: \"25831da8-a752-4cf2-9154-8cc119484cdf\") " pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.903199 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25831da8-a752-4cf2-9154-8cc119484cdf-utilities\") pod \"certified-operators-g26ff\" (UID: \"25831da8-a752-4cf2-9154-8cc119484cdf\") " pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.903294 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25831da8-a752-4cf2-9154-8cc119484cdf-catalog-content\") pod \"certified-operators-g26ff\" (UID: \"25831da8-a752-4cf2-9154-8cc119484cdf\") " pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.904965 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25831da8-a752-4cf2-9154-8cc119484cdf-catalog-content\") pod \"certified-operators-g26ff\" (UID: \"25831da8-a752-4cf2-9154-8cc119484cdf\") " pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.905591 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25831da8-a752-4cf2-9154-8cc119484cdf-utilities\") pod \"certified-operators-g26ff\" (UID: 
\"25831da8-a752-4cf2-9154-8cc119484cdf\") " pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.925786 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p69x\" (UniqueName: \"kubernetes.io/projected/25831da8-a752-4cf2-9154-8cc119484cdf-kube-api-access-7p69x\") pod \"certified-operators-g26ff\" (UID: \"25831da8-a752-4cf2-9154-8cc119484cdf\") " pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.974113 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kvts5"] Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.975164 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.979552 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 23 00:13:12 crc kubenswrapper[4743]: I1123 00:13:12.987168 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvts5"] Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.011057 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100bf8b6-d36a-46bc-ba47-ea537ea03f87-utilities\") pod \"community-operators-kvts5\" (UID: \"100bf8b6-d36a-46bc-ba47-ea537ea03f87\") " pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.011133 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8288j\" (UniqueName: \"kubernetes.io/projected/100bf8b6-d36a-46bc-ba47-ea537ea03f87-kube-api-access-8288j\") pod \"community-operators-kvts5\" (UID: \"100bf8b6-d36a-46bc-ba47-ea537ea03f87\") " pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.011160 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100bf8b6-d36a-46bc-ba47-ea537ea03f87-catalog-content\") pod \"community-operators-kvts5\" (UID: \"100bf8b6-d36a-46bc-ba47-ea537ea03f87\") " pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.098768 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.112434 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100bf8b6-d36a-46bc-ba47-ea537ea03f87-utilities\") pod \"community-operators-kvts5\" (UID: \"100bf8b6-d36a-46bc-ba47-ea537ea03f87\") " pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.112567 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8288j\" (UniqueName: \"kubernetes.io/projected/100bf8b6-d36a-46bc-ba47-ea537ea03f87-kube-api-access-8288j\") pod \"community-operators-kvts5\" (UID: \"100bf8b6-d36a-46bc-ba47-ea537ea03f87\") " pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.112599 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100bf8b6-d36a-46bc-ba47-ea537ea03f87-catalog-content\") pod \"community-operators-kvts5\" (UID: \"100bf8b6-d36a-46bc-ba47-ea537ea03f87\") " pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.113545 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100bf8b6-d36a-46bc-ba47-ea537ea03f87-utilities\") pod \"community-operators-kvts5\" (UID: \"100bf8b6-d36a-46bc-ba47-ea537ea03f87\") " pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.113782 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100bf8b6-d36a-46bc-ba47-ea537ea03f87-catalog-content\") pod \"community-operators-kvts5\" (UID: \"100bf8b6-d36a-46bc-ba47-ea537ea03f87\") " pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.144496 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8288j\" (UniqueName: \"kubernetes.io/projected/100bf8b6-d36a-46bc-ba47-ea537ea03f87-kube-api-access-8288j\") pod \"community-operators-kvts5\" (UID: \"100bf8b6-d36a-46bc-ba47-ea537ea03f87\") " pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.292132 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.500242 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g26ff"] Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.521956 4743 generic.go:334] "Generic (PLEG): container finished" podID="fb0dbcfe-3399-4146-b367-97582fb884cc" containerID="178eeb88fafe12642bc9287de34d7030f6c21399372fdffeaa21de7a849bb294" exitCode=0 Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.522145 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c555x" event={"ID":"fb0dbcfe-3399-4146-b367-97582fb884cc","Type":"ContainerDied","Data":"178eeb88fafe12642bc9287de34d7030f6c21399372fdffeaa21de7a849bb294"} Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.526323 4743 generic.go:334] "Generic (PLEG): container finished" podID="ff76a6bc-06e4-4da8-828c-57a37fa57681" containerID="207cf4eb50a1969aeca8b79778e70de550ff0559d4f6f7c606a7c03f88379564" exitCode=0 Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.526681 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlkrc" event={"ID":"ff76a6bc-06e4-4da8-828c-57a37fa57681","Type":"ContainerDied","Data":"207cf4eb50a1969aeca8b79778e70de550ff0559d4f6f7c606a7c03f88379564"} Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.529798 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g26ff" event={"ID":"25831da8-a752-4cf2-9154-8cc119484cdf","Type":"ContainerStarted","Data":"d98cb8794b41ab0610ea4f69afc8d438db35c1b64d4c917ac85b47f9c7785877"} Nov 23 00:13:13 crc kubenswrapper[4743]: I1123 00:13:13.690783 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvts5"] Nov 23 00:13:14 crc kubenswrapper[4743]: I1123 00:13:14.537071 4743 generic.go:334] "Generic (PLEG): container finished" podID="25831da8-a752-4cf2-9154-8cc119484cdf" containerID="af5a9643618cee2c8a51fb627111d6cc5ec11979b541879a5e2789014befd521" exitCode=0 Nov 23 00:13:14 crc kubenswrapper[4743]: I1123 00:13:14.537147 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g26ff" event={"ID":"25831da8-a752-4cf2-9154-8cc119484cdf","Type":"ContainerDied","Data":"af5a9643618cee2c8a51fb627111d6cc5ec11979b541879a5e2789014befd521"} Nov 23 00:13:14 crc kubenswrapper[4743]: I1123 00:13:14.543388 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c555x" event={"ID":"fb0dbcfe-3399-4146-b367-97582fb884cc","Type":"ContainerStarted","Data":"2e4074b4bf4d70b56e4d35ed764701441ffe1488979e7797bb12609bbb99b52c"} Nov 23 00:13:14 crc kubenswrapper[4743]: I1123 00:13:14.546308 4743 generic.go:334] "Generic (PLEG): container finished" podID="100bf8b6-d36a-46bc-ba47-ea537ea03f87" containerID="cc42191f3fed1bce0d24bf01f444bb69ab2bb71d9db310ff5f5fbd0d1a429a56" exitCode=0 Nov 23 00:13:14 crc kubenswrapper[4743]: I1123 00:13:14.546412 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvts5" event={"ID":"100bf8b6-d36a-46bc-ba47-ea537ea03f87","Type":"ContainerDied","Data":"cc42191f3fed1bce0d24bf01f444bb69ab2bb71d9db310ff5f5fbd0d1a429a56"} Nov 23 00:13:14 crc kubenswrapper[4743]: I1123 00:13:14.546446 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvts5" 
event={"ID":"100bf8b6-d36a-46bc-ba47-ea537ea03f87","Type":"ContainerStarted","Data":"4e0242a22facd1fe4d217ce3f02ddf8061c912f03ec556eef53e971e9681f7f3"} Nov 23 00:13:14 crc kubenswrapper[4743]: I1123 00:13:14.549226 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlkrc" event={"ID":"ff76a6bc-06e4-4da8-828c-57a37fa57681","Type":"ContainerStarted","Data":"fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7"} Nov 23 00:13:14 crc kubenswrapper[4743]: I1123 00:13:14.608801 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qlkrc" podStartSLOduration=2.045221242 podStartE2EDuration="4.608776507s" podCreationTimestamp="2025-11-23 00:13:10 +0000 UTC" firstStartedPulling="2025-11-23 00:13:11.506151794 +0000 UTC m=+383.584249931" lastFinishedPulling="2025-11-23 00:13:14.069707069 +0000 UTC m=+386.147805196" observedRunningTime="2025-11-23 00:13:14.584021809 +0000 UTC m=+386.662119946" watchObservedRunningTime="2025-11-23 00:13:14.608776507 +0000 UTC m=+386.686874634" Nov 23 00:13:14 crc kubenswrapper[4743]: I1123 00:13:14.609544 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c555x" podStartSLOduration=2.183461343 podStartE2EDuration="4.609533496s" podCreationTimestamp="2025-11-23 00:13:10 +0000 UTC" firstStartedPulling="2025-11-23 00:13:11.508871192 +0000 UTC m=+383.586969319" lastFinishedPulling="2025-11-23 00:13:13.934943345 +0000 UTC m=+386.013041472" observedRunningTime="2025-11-23 00:13:14.604710636 +0000 UTC m=+386.682808783" watchObservedRunningTime="2025-11-23 00:13:14.609533496 +0000 UTC m=+386.687631633" Nov 23 00:13:16 crc kubenswrapper[4743]: I1123 00:13:16.570979 4743 generic.go:334] "Generic (PLEG): container finished" podID="100bf8b6-d36a-46bc-ba47-ea537ea03f87" containerID="4eef3f65c4d0ca0f03c89c0c3ea8e7ff6225d46f83b8ab44623ef18d9fccd77f" exitCode=0 Nov 23 00:13:16 crc kubenswrapper[4743]: I1123 00:13:16.572068 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvts5" event={"ID":"100bf8b6-d36a-46bc-ba47-ea537ea03f87","Type":"ContainerDied","Data":"4eef3f65c4d0ca0f03c89c0c3ea8e7ff6225d46f83b8ab44623ef18d9fccd77f"} Nov 23 00:13:16 crc kubenswrapper[4743]: I1123 00:13:16.575093 4743 generic.go:334] "Generic (PLEG): container finished" podID="25831da8-a752-4cf2-9154-8cc119484cdf" containerID="6e98f7c052dda8e9236e650aaa96d25ef7ef8aebfbdc0f0ee54bfb03e6e77a97" exitCode=0 Nov 23 00:13:16 crc kubenswrapper[4743]: I1123 00:13:16.575179 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g26ff" event={"ID":"25831da8-a752-4cf2-9154-8cc119484cdf","Type":"ContainerDied","Data":"6e98f7c052dda8e9236e650aaa96d25ef7ef8aebfbdc0f0ee54bfb03e6e77a97"} Nov 23 00:13:18 crc kubenswrapper[4743]: I1123 00:13:18.591621 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g26ff" event={"ID":"25831da8-a752-4cf2-9154-8cc119484cdf","Type":"ContainerStarted","Data":"6832fe1b29b97549bfc3d94ec24483099e0400559d87822055c165e87af8641a"} Nov 23 00:13:18 crc kubenswrapper[4743]: I1123 00:13:18.594818 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvts5" event={"ID":"100bf8b6-d36a-46bc-ba47-ea537ea03f87","Type":"ContainerStarted","Data":"cd573c446e09083674c772a585e2a004dcc3f88c02c74e415ba20487032f9197"} Nov 23 00:13:18 crc 
kubenswrapper[4743]: I1123 00:13:18.637340 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kvts5" podStartSLOduration=4.22274949 podStartE2EDuration="6.637319998s" podCreationTimestamp="2025-11-23 00:13:12 +0000 UTC" firstStartedPulling="2025-11-23 00:13:14.549323233 +0000 UTC m=+386.627421360" lastFinishedPulling="2025-11-23 00:13:16.963893741 +0000 UTC m=+389.041991868" observedRunningTime="2025-11-23 00:13:18.634446676 +0000 UTC m=+390.712544873" watchObservedRunningTime="2025-11-23 00:13:18.637319998 +0000 UTC m=+390.715418125" Nov 23 00:13:18 crc kubenswrapper[4743]: I1123 00:13:18.637940 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g26ff" podStartSLOduration=4.070806296 podStartE2EDuration="6.637933233s" podCreationTimestamp="2025-11-23 00:13:12 +0000 UTC" firstStartedPulling="2025-11-23 00:13:14.539110428 +0000 UTC m=+386.617208555" lastFinishedPulling="2025-11-23 00:13:17.106237365 +0000 UTC m=+389.184335492" observedRunningTime="2025-11-23 00:13:18.617074572 +0000 UTC m=+390.695172699" watchObservedRunningTime="2025-11-23 00:13:18.637933233 +0000 UTC m=+390.716031360" Nov 23 00:13:20 crc kubenswrapper[4743]: I1123 00:13:20.699828 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:20 crc kubenswrapper[4743]: I1123 00:13:20.700912 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:20 crc kubenswrapper[4743]: I1123 00:13:20.765415 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:20 crc kubenswrapper[4743]: I1123 00:13:20.901989 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:20 crc kubenswrapper[4743]: I1123 00:13:20.902056 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:20 crc kubenswrapper[4743]: I1123 00:13:20.944133 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:21 crc kubenswrapper[4743]: I1123 00:13:21.666639 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:13:21 crc kubenswrapper[4743]: I1123 00:13:21.682595 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c555x" Nov 23 00:13:23 crc kubenswrapper[4743]: I1123 00:13:23.098954 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:23 crc kubenswrapper[4743]: I1123 00:13:23.099396 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:23 crc kubenswrapper[4743]: I1123 00:13:23.154865 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:23 crc kubenswrapper[4743]: I1123 00:13:23.292343 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:23 crc kubenswrapper[4743]: 
I1123 00:13:23.292424 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:23 crc kubenswrapper[4743]: I1123 00:13:23.336782 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:23 crc kubenswrapper[4743]: I1123 00:13:23.682339 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kvts5" Nov 23 00:13:23 crc kubenswrapper[4743]: I1123 00:13:23.686465 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g26ff" Nov 23 00:13:23 crc kubenswrapper[4743]: I1123 00:13:23.690246 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:13:23 crc kubenswrapper[4743]: I1123 00:13:23.690317 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:13:34 crc kubenswrapper[4743]: I1123 00:13:34.751558 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" podUID="c9260cd3-3e10-47fe-b6f9-806bc90621fd" containerName="oauth-openshift" containerID="cri-o://cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0" gracePeriod=15 Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.166337 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.210984 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9"] Nov 23 00:13:35 crc kubenswrapper[4743]: E1123 00:13:35.211242 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9260cd3-3e10-47fe-b6f9-806bc90621fd" containerName="oauth-openshift" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.211255 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9260cd3-3e10-47fe-b6f9-806bc90621fd" containerName="oauth-openshift" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.211343 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9260cd3-3e10-47fe-b6f9-806bc90621fd" containerName="oauth-openshift" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.211786 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.252726 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9"] Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340123 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-router-certs\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340175 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-serving-cert\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340200 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-provider-selection\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340237 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-session\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340273 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-service-ca\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340337 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-cliconfig\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340363 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-error\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340402 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-policies\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340438 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-login\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340466 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-trusted-ca-bundle\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340515 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-dir\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340601 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rqq5\" (UniqueName: \"kubernetes.io/projected/c9260cd3-3e10-47fe-b6f9-806bc90621fd-kube-api-access-4rqq5\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340654 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-ocp-branding-template\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340682 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-idp-0-file-data\") pod \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\" (UID: \"c9260cd3-3e10-47fe-b6f9-806bc90621fd\") " Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340872 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340904 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-template-login\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340935 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c9qn\" (UniqueName: \"kubernetes.io/projected/749baec2-c45a-419c-9afc-3487212c0d50-kube-api-access-8c9qn\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.340963 4743 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.341004 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.341034 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-template-error\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.341063 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.341090 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.341115 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/749baec2-c45a-419c-9afc-3487212c0d50-audit-dir\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.341141 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.341174 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " 
pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.341204 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-session\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.341228 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-audit-policies\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.341248 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.343036 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.343423 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.343581 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.343919 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.344253 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.348838 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.354683 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9260cd3-3e10-47fe-b6f9-806bc90621fd-kube-api-access-4rqq5" (OuterVolumeSpecName: "kube-api-access-4rqq5") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "kube-api-access-4rqq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.354828 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.355070 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.355294 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.355505 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.356844 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.357670 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.358530 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c9260cd3-3e10-47fe-b6f9-806bc90621fd" (UID: "c9260cd3-3e10-47fe-b6f9-806bc90621fd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442069 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442144 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-session\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442170 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-audit-policies\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442198 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442220 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442238 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-template-login\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442265 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c9qn\" (UniqueName: \"kubernetes.io/projected/749baec2-c45a-419c-9afc-3487212c0d50-kube-api-access-8c9qn\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442282 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442307 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442326 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-template-error\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442345 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442364 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442382 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/749baec2-c45a-419c-9afc-3487212c0d50-audit-dir\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442402 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442456 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442470 4743 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442498 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rqq5\" (UniqueName: \"kubernetes.io/projected/c9260cd3-3e10-47fe-b6f9-806bc90621fd-kube-api-access-4rqq5\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442520 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442530 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442541 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442552 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442565 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442576 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442585 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442594 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442603 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442613 4743 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9260cd3-3e10-47fe-b6f9-806bc90621fd-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.442622 4743 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9260cd3-3e10-47fe-b6f9-806bc90621fd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.444178 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-audit-policies\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.444945 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.445174 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.445821 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/749baec2-c45a-419c-9afc-3487212c0d50-audit-dir\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.446146 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-template-error\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.446961 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-session\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.447328 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.447527 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.448416 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.448624 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.449729 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.450244 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.452464 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/749baec2-c45a-419c-9afc-3487212c0d50-v4-0-config-user-template-login\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.459815 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c9qn\" (UniqueName: 
\"kubernetes.io/projected/749baec2-c45a-419c-9afc-3487212c0d50-kube-api-access-8c9qn\") pod \"oauth-openshift-7cd8f88d7f-rw4m9\" (UID: \"749baec2-c45a-419c-9afc-3487212c0d50\") " pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.558920 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.709252 4743 generic.go:334] "Generic (PLEG): container finished" podID="c9260cd3-3e10-47fe-b6f9-806bc90621fd" containerID="cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0" exitCode=0 Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.709326 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" event={"ID":"c9260cd3-3e10-47fe-b6f9-806bc90621fd","Type":"ContainerDied","Data":"cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0"} Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.709369 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" event={"ID":"c9260cd3-3e10-47fe-b6f9-806bc90621fd","Type":"ContainerDied","Data":"476f08fc9a02bc558543ca07eb2a3dde8733ea3a90adfb540ee3cf14115689af"} Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.709378 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zgddj" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.709400 4743 scope.go:117] "RemoveContainer" containerID="cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.752992 4743 scope.go:117] "RemoveContainer" containerID="cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.761778 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgddj"] Nov 23 00:13:35 crc kubenswrapper[4743]: E1123 00:13:35.763897 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0\": container with ID starting with cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0 not found: ID does not exist" containerID="cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.763939 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0"} err="failed to get container status \"cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0\": rpc error: code = NotFound desc = could not find container \"cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0\": container with ID starting with cb21a287317ba14334cf63f5eff32df3a69548b67d2a894cc8cd57e25507b0e0 not found: ID does not exist" Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.780175 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgddj"] Nov 23 00:13:35 crc kubenswrapper[4743]: I1123 00:13:35.809595 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9"] Nov 23 00:13:35 crc 
kubenswrapper[4743]: W1123 00:13:35.825204 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod749baec2_c45a_419c_9afc_3487212c0d50.slice/crio-93a1219b1312241fe193b4b8329f1631990c9f8d6cd24f51830f700fb08453e6 WatchSource:0}: Error finding container 93a1219b1312241fe193b4b8329f1631990c9f8d6cd24f51830f700fb08453e6: Status 404 returned error can't find the container with id 93a1219b1312241fe193b4b8329f1631990c9f8d6cd24f51830f700fb08453e6 Nov 23 00:13:36 crc kubenswrapper[4743]: I1123 00:13:36.733602 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9260cd3-3e10-47fe-b6f9-806bc90621fd" path="/var/lib/kubelet/pods/c9260cd3-3e10-47fe-b6f9-806bc90621fd/volumes" Nov 23 00:13:36 crc kubenswrapper[4743]: I1123 00:13:36.734632 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" event={"ID":"749baec2-c45a-419c-9afc-3487212c0d50","Type":"ContainerStarted","Data":"a576337922bb066a7438cf7190e1acaec25a217673a3abd1963c2c35b74b25ab"} Nov 23 00:13:36 crc kubenswrapper[4743]: I1123 00:13:36.734678 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:36 crc kubenswrapper[4743]: I1123 00:13:36.734693 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" event={"ID":"749baec2-c45a-419c-9afc-3487212c0d50","Type":"ContainerStarted","Data":"93a1219b1312241fe193b4b8329f1631990c9f8d6cd24f51830f700fb08453e6"} Nov 23 00:13:36 crc kubenswrapper[4743]: I1123 00:13:36.736298 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" Nov 23 00:13:36 crc kubenswrapper[4743]: I1123 00:13:36.773713 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7cd8f88d7f-rw4m9" podStartSLOduration=27.773680453 podStartE2EDuration="27.773680453s" podCreationTimestamp="2025-11-23 00:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:13:36.75996126 +0000 UTC m=+408.838059457" watchObservedRunningTime="2025-11-23 00:13:36.773680453 +0000 UTC m=+408.851778620" Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.790301 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z94b8"] Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.792465 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.809181 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z94b8"] Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.929341 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.929462 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnpbr\" (UniqueName: \"kubernetes.io/projected/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-kube-api-access-rnpbr\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.929522 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-trusted-ca\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.929561 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.929624 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-bound-sa-token\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.929686 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-registry-certificates\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.929860 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.929953 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-registry-tls\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:40 crc kubenswrapper[4743]: I1123 00:13:40.965907 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.031363 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-bound-sa-token\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.031415 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-registry-certificates\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.031447 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.031472 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-registry-tls\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.031536 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnpbr\" (UniqueName: \"kubernetes.io/projected/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-kube-api-access-rnpbr\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.031558 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-trusted-ca\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.031591 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.032191 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.033528 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-trusted-ca\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.033870 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-registry-certificates\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.042427 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-registry-tls\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.044331 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.050928 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-bound-sa-token\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.053887 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnpbr\" (UniqueName: \"kubernetes.io/projected/38ef9ad8-ad5b-4d88-a761-7b35e7481b79-kube-api-access-rnpbr\") pod \"image-registry-66df7c8f76-z94b8\" (UID: \"38ef9ad8-ad5b-4d88-a761-7b35e7481b79\") " pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.111328 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.631394 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z94b8"] Nov 23 00:13:41 crc kubenswrapper[4743]: W1123 00:13:41.644225 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38ef9ad8_ad5b_4d88_a761_7b35e7481b79.slice/crio-138c34c3fa2efa5b5b469e760905525fbe43929248fbf377bbadf2a2cd37d198 WatchSource:0}: Error finding container 138c34c3fa2efa5b5b469e760905525fbe43929248fbf377bbadf2a2cd37d198: Status 404 returned error can't find the container with id 138c34c3fa2efa5b5b469e760905525fbe43929248fbf377bbadf2a2cd37d198 Nov 23 00:13:41 crc kubenswrapper[4743]: I1123 00:13:41.762450 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" event={"ID":"38ef9ad8-ad5b-4d88-a761-7b35e7481b79","Type":"ContainerStarted","Data":"138c34c3fa2efa5b5b469e760905525fbe43929248fbf377bbadf2a2cd37d198"} Nov 23 00:13:42 crc kubenswrapper[4743]: I1123 00:13:42.770899 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" event={"ID":"38ef9ad8-ad5b-4d88-a761-7b35e7481b79","Type":"ContainerStarted","Data":"b494465164b31aa8d4a6e89acece5aefc67d0beb7fbd85f36ea1e4efd000081a"} Nov 23 00:13:42 crc kubenswrapper[4743]: I1123 00:13:42.771121 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:13:42 crc kubenswrapper[4743]: I1123 00:13:42.802165 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" podStartSLOduration=2.802138709 podStartE2EDuration="2.802138709s" podCreationTimestamp="2025-11-23 00:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:13:42.80137607 +0000 UTC m=+414.879474277" watchObservedRunningTime="2025-11-23 00:13:42.802138709 +0000 UTC m=+414.880236876" Nov 23 00:13:53 crc kubenswrapper[4743]: I1123 00:13:53.690223 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:13:53 crc kubenswrapper[4743]: I1123 00:13:53.690908 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:13:53 crc kubenswrapper[4743]: I1123 00:13:53.691021 4743 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:13:53 crc kubenswrapper[4743]: I1123 00:13:53.696171 4743 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2b13ebe17552f951faf90e351bc90e649033afac2af3f67154656c929f99c99"} pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 00:13:53 crc kubenswrapper[4743]: I1123 00:13:53.697419 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" containerID="cri-o://f2b13ebe17552f951faf90e351bc90e649033afac2af3f67154656c929f99c99" gracePeriod=600 Nov 23 00:13:54 crc kubenswrapper[4743]: I1123 00:13:54.874421 4743 generic.go:334] "Generic (PLEG): container finished" podID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerID="f2b13ebe17552f951faf90e351bc90e649033afac2af3f67154656c929f99c99" exitCode=0 Nov 23 00:13:54 crc kubenswrapper[4743]: I1123 00:13:54.874897 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerDied","Data":"f2b13ebe17552f951faf90e351bc90e649033afac2af3f67154656c929f99c99"} Nov 23 00:13:54 crc kubenswrapper[4743]: I1123 00:13:54.874931 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"ac4a531f9521e82f7c0f94fe0a679c468fadebb72ad0795bf5932aa8b3bb78e4"} Nov 23 00:13:54 crc kubenswrapper[4743]: I1123 00:13:54.874963 4743 scope.go:117] "RemoveContainer" containerID="a941c5d3e80b264afe337605f5b02c4db50053e3ce24cba009e9b1298e5d94b4" Nov 23 00:14:01 crc kubenswrapper[4743]: I1123 00:14:01.122904 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-z94b8" Nov 23 00:14:01 crc kubenswrapper[4743]: I1123 00:14:01.195372 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t4jq5"] Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.237937 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" podUID="87578262-f89f-4b5c-92ab-a94000397e31" containerName="registry" containerID="cri-o://0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694" gracePeriod=30 Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.717079 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.825874 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-registry-tls\") pod \"87578262-f89f-4b5c-92ab-a94000397e31\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.825984 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blqmt\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-kube-api-access-blqmt\") pod \"87578262-f89f-4b5c-92ab-a94000397e31\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.826070 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-bound-sa-token\") pod \"87578262-f89f-4b5c-92ab-a94000397e31\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.826156 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-trusted-ca\") pod \"87578262-f89f-4b5c-92ab-a94000397e31\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.826213 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87578262-f89f-4b5c-92ab-a94000397e31-ca-trust-extracted\") pod \"87578262-f89f-4b5c-92ab-a94000397e31\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.826676 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"87578262-f89f-4b5c-92ab-a94000397e31\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.826762 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87578262-f89f-4b5c-92ab-a94000397e31-installation-pull-secrets\") pod \"87578262-f89f-4b5c-92ab-a94000397e31\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.826821 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-registry-certificates\") pod \"87578262-f89f-4b5c-92ab-a94000397e31\" (UID: \"87578262-f89f-4b5c-92ab-a94000397e31\") " Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.828547 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "87578262-f89f-4b5c-92ab-a94000397e31" (UID: "87578262-f89f-4b5c-92ab-a94000397e31"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.829941 4743 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.829018 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "87578262-f89f-4b5c-92ab-a94000397e31" (UID: "87578262-f89f-4b5c-92ab-a94000397e31"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.836985 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87578262-f89f-4b5c-92ab-a94000397e31-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "87578262-f89f-4b5c-92ab-a94000397e31" (UID: "87578262-f89f-4b5c-92ab-a94000397e31"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.837728 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "87578262-f89f-4b5c-92ab-a94000397e31" (UID: "87578262-f89f-4b5c-92ab-a94000397e31"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.842447 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-kube-api-access-blqmt" (OuterVolumeSpecName: "kube-api-access-blqmt") pod "87578262-f89f-4b5c-92ab-a94000397e31" (UID: "87578262-f89f-4b5c-92ab-a94000397e31"). InnerVolumeSpecName "kube-api-access-blqmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.843974 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "87578262-f89f-4b5c-92ab-a94000397e31" (UID: "87578262-f89f-4b5c-92ab-a94000397e31"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.844764 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "87578262-f89f-4b5c-92ab-a94000397e31" (UID: "87578262-f89f-4b5c-92ab-a94000397e31"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.876738 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87578262-f89f-4b5c-92ab-a94000397e31-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "87578262-f89f-4b5c-92ab-a94000397e31" (UID: "87578262-f89f-4b5c-92ab-a94000397e31"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.931897 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blqmt\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-kube-api-access-blqmt\") on node \"crc\" DevicePath \"\"" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.931957 4743 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.931971 4743 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87578262-f89f-4b5c-92ab-a94000397e31-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.931981 4743 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87578262-f89f-4b5c-92ab-a94000397e31-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.931991 4743 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87578262-f89f-4b5c-92ab-a94000397e31-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 23 00:14:26 crc kubenswrapper[4743]: I1123 00:14:26.932005 4743 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87578262-f89f-4b5c-92ab-a94000397e31-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 00:14:27 crc kubenswrapper[4743]: I1123 00:14:27.124049 4743 generic.go:334] "Generic (PLEG): container finished" podID="87578262-f89f-4b5c-92ab-a94000397e31" containerID="0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694" exitCode=0 Nov 23 00:14:27 crc kubenswrapper[4743]: I1123 00:14:27.124166 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" Nov 23 00:14:27 crc kubenswrapper[4743]: I1123 00:14:27.124153 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" event={"ID":"87578262-f89f-4b5c-92ab-a94000397e31","Type":"ContainerDied","Data":"0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694"} Nov 23 00:14:27 crc kubenswrapper[4743]: I1123 00:14:27.125785 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t4jq5" event={"ID":"87578262-f89f-4b5c-92ab-a94000397e31","Type":"ContainerDied","Data":"78b9d1ccbb687865b827ff8e12fe054a745d1d85cdc2f12938d83361842697e0"} Nov 23 00:14:27 crc kubenswrapper[4743]: I1123 00:14:27.125825 4743 scope.go:117] "RemoveContainer" containerID="0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694" Nov 23 00:14:27 crc kubenswrapper[4743]: I1123 00:14:27.165164 4743 scope.go:117] "RemoveContainer" containerID="0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694" Nov 23 00:14:27 crc kubenswrapper[4743]: E1123 00:14:27.165802 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694\": container with ID starting with 0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694 not found: ID does not exist" containerID="0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694" Nov 23 00:14:27 crc kubenswrapper[4743]: I1123 00:14:27.165883 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694"} err="failed to get container status \"0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694\": rpc error: code = NotFound desc = could not find container \"0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694\": container with ID starting with 0f94ed88e0763c0da1ed8d6377c56f91712f1f1d12fac07d03c859ed4b693694 not found: ID does not exist" Nov 23 00:14:27 crc kubenswrapper[4743]: I1123 00:14:27.179746 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t4jq5"] Nov 23 00:14:27 crc kubenswrapper[4743]: I1123 00:14:27.194043 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t4jq5"] Nov 23 00:14:28 crc kubenswrapper[4743]: I1123 00:14:28.736144 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87578262-f89f-4b5c-92ab-a94000397e31" path="/var/lib/kubelet/pods/87578262-f89f-4b5c-92ab-a94000397e31/volumes" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.157273 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47"] Nov 23 00:15:00 crc kubenswrapper[4743]: E1123 00:15:00.158122 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87578262-f89f-4b5c-92ab-a94000397e31" containerName="registry" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.158144 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="87578262-f89f-4b5c-92ab-a94000397e31" containerName="registry" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.158287 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="87578262-f89f-4b5c-92ab-a94000397e31" containerName="registry" Nov 23 
00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.158822 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.161615 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.162084 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.181633 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47"] Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.215702 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3346ceb4-7eb8-48f1-be06-28071a292da5-config-volume\") pod \"collect-profiles-29397615-98q47\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.215784 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9mtp\" (UniqueName: \"kubernetes.io/projected/3346ceb4-7eb8-48f1-be06-28071a292da5-kube-api-access-v9mtp\") pod \"collect-profiles-29397615-98q47\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.215829 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3346ceb4-7eb8-48f1-be06-28071a292da5-secret-volume\") pod \"collect-profiles-29397615-98q47\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.316973 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3346ceb4-7eb8-48f1-be06-28071a292da5-secret-volume\") pod \"collect-profiles-29397615-98q47\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.317040 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3346ceb4-7eb8-48f1-be06-28071a292da5-config-volume\") pod \"collect-profiles-29397615-98q47\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.317078 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9mtp\" (UniqueName: \"kubernetes.io/projected/3346ceb4-7eb8-48f1-be06-28071a292da5-kube-api-access-v9mtp\") pod \"collect-profiles-29397615-98q47\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.318007 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/3346ceb4-7eb8-48f1-be06-28071a292da5-config-volume\") pod \"collect-profiles-29397615-98q47\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.323688 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3346ceb4-7eb8-48f1-be06-28071a292da5-secret-volume\") pod \"collect-profiles-29397615-98q47\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.333627 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9mtp\" (UniqueName: \"kubernetes.io/projected/3346ceb4-7eb8-48f1-be06-28071a292da5-kube-api-access-v9mtp\") pod \"collect-profiles-29397615-98q47\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.522776 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:00 crc kubenswrapper[4743]: I1123 00:15:00.735452 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47"] Nov 23 00:15:00 crc kubenswrapper[4743]: W1123 00:15:00.751702 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3346ceb4_7eb8_48f1_be06_28071a292da5.slice/crio-66469052f692384f66b691b750543fd387e6464cf29259736e53b3a1e8cc0791 WatchSource:0}: Error finding container 66469052f692384f66b691b750543fd387e6464cf29259736e53b3a1e8cc0791: Status 404 returned error can't find the container with id 66469052f692384f66b691b750543fd387e6464cf29259736e53b3a1e8cc0791 Nov 23 00:15:01 crc kubenswrapper[4743]: I1123 00:15:01.400361 4743 generic.go:334] "Generic (PLEG): container finished" podID="3346ceb4-7eb8-48f1-be06-28071a292da5" containerID="ce533d5840776eecab9bad2f421449859e6d203ea89492c45dc95734167d3b96" exitCode=0 Nov 23 00:15:01 crc kubenswrapper[4743]: I1123 00:15:01.400679 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" event={"ID":"3346ceb4-7eb8-48f1-be06-28071a292da5","Type":"ContainerDied","Data":"ce533d5840776eecab9bad2f421449859e6d203ea89492c45dc95734167d3b96"} Nov 23 00:15:01 crc kubenswrapper[4743]: I1123 00:15:01.400961 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" event={"ID":"3346ceb4-7eb8-48f1-be06-28071a292da5","Type":"ContainerStarted","Data":"66469052f692384f66b691b750543fd387e6464cf29259736e53b3a1e8cc0791"} Nov 23 00:15:02 crc kubenswrapper[4743]: I1123 00:15:02.748042 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:15:02 crc kubenswrapper[4743]: I1123 00:15:02.751771 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9mtp\" (UniqueName: \"kubernetes.io/projected/3346ceb4-7eb8-48f1-be06-28071a292da5-kube-api-access-v9mtp\") pod \"3346ceb4-7eb8-48f1-be06-28071a292da5\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " Nov 23 00:15:02 crc kubenswrapper[4743]: I1123 00:15:02.752046 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3346ceb4-7eb8-48f1-be06-28071a292da5-secret-volume\") pod \"3346ceb4-7eb8-48f1-be06-28071a292da5\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " Nov 23 00:15:02 crc kubenswrapper[4743]: I1123 00:15:02.752139 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3346ceb4-7eb8-48f1-be06-28071a292da5-config-volume\") pod \"3346ceb4-7eb8-48f1-be06-28071a292da5\" (UID: \"3346ceb4-7eb8-48f1-be06-28071a292da5\") " Nov 23 00:15:02 crc kubenswrapper[4743]: I1123 00:15:02.752756 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3346ceb4-7eb8-48f1-be06-28071a292da5-config-volume" (OuterVolumeSpecName: "config-volume") pod "3346ceb4-7eb8-48f1-be06-28071a292da5" (UID: "3346ceb4-7eb8-48f1-be06-28071a292da5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:15:02 crc kubenswrapper[4743]: I1123 00:15:02.758970 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3346ceb4-7eb8-48f1-be06-28071a292da5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3346ceb4-7eb8-48f1-be06-28071a292da5" (UID: "3346ceb4-7eb8-48f1-be06-28071a292da5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:15:02 crc kubenswrapper[4743]: I1123 00:15:02.762723 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3346ceb4-7eb8-48f1-be06-28071a292da5-kube-api-access-v9mtp" (OuterVolumeSpecName: "kube-api-access-v9mtp") pod "3346ceb4-7eb8-48f1-be06-28071a292da5" (UID: "3346ceb4-7eb8-48f1-be06-28071a292da5"). InnerVolumeSpecName "kube-api-access-v9mtp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:15:02 crc kubenswrapper[4743]: I1123 00:15:02.854260 4743 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3346ceb4-7eb8-48f1-be06-28071a292da5-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 00:15:02 crc kubenswrapper[4743]: I1123 00:15:02.854306 4743 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3346ceb4-7eb8-48f1-be06-28071a292da5-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 00:15:02 crc kubenswrapper[4743]: I1123 00:15:02.854321 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9mtp\" (UniqueName: \"kubernetes.io/projected/3346ceb4-7eb8-48f1-be06-28071a292da5-kube-api-access-v9mtp\") on node \"crc\" DevicePath \"\"" Nov 23 00:15:03 crc kubenswrapper[4743]: I1123 00:15:03.414454 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" event={"ID":"3346ceb4-7eb8-48f1-be06-28071a292da5","Type":"ContainerDied","Data":"66469052f692384f66b691b750543fd387e6464cf29259736e53b3a1e8cc0791"} Nov 23 00:15:03 crc kubenswrapper[4743]: I1123 00:15:03.414852 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66469052f692384f66b691b750543fd387e6464cf29259736e53b3a1e8cc0791" Nov 23 00:15:03 crc kubenswrapper[4743]: I1123 00:15:03.414535 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397615-98q47" Nov 23 00:16:23 crc kubenswrapper[4743]: I1123 00:16:23.690026 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:16:23 crc kubenswrapper[4743]: I1123 00:16:23.690618 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:16:53 crc kubenswrapper[4743]: I1123 00:16:53.690756 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:16:53 crc kubenswrapper[4743]: I1123 00:16:53.692431 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:17:23 crc kubenswrapper[4743]: I1123 00:17:23.690189 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:17:23 crc kubenswrapper[4743]: I1123 
00:17:23.691010 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:17:23 crc kubenswrapper[4743]: I1123 00:17:23.691069 4743 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:17:23 crc kubenswrapper[4743]: I1123 00:17:23.691873 4743 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac4a531f9521e82f7c0f94fe0a679c468fadebb72ad0795bf5932aa8b3bb78e4"} pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 00:17:23 crc kubenswrapper[4743]: I1123 00:17:23.691965 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" containerID="cri-o://ac4a531f9521e82f7c0f94fe0a679c468fadebb72ad0795bf5932aa8b3bb78e4" gracePeriod=600 Nov 23 00:17:24 crc kubenswrapper[4743]: I1123 00:17:24.274262 4743 generic.go:334] "Generic (PLEG): container finished" podID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerID="ac4a531f9521e82f7c0f94fe0a679c468fadebb72ad0795bf5932aa8b3bb78e4" exitCode=0 Nov 23 00:17:24 crc kubenswrapper[4743]: I1123 00:17:24.274374 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerDied","Data":"ac4a531f9521e82f7c0f94fe0a679c468fadebb72ad0795bf5932aa8b3bb78e4"} Nov 23 00:17:24 crc kubenswrapper[4743]: I1123 00:17:24.274748 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"052e275822d2ee2fb1b2c9a5a7391cecc2e3d47d664aaa005c530fa35f4013d9"} Nov 23 00:17:24 crc kubenswrapper[4743]: I1123 00:17:24.274765 4743 scope.go:117] "RemoveContainer" containerID="f2b13ebe17552f951faf90e351bc90e649033afac2af3f67154656c929f99c99" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.519995 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v64gz"] Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.521113 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovn-controller" containerID="cri-o://496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7" gracePeriod=30 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.521193 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="nbdb" containerID="cri-o://c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0" gracePeriod=30 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.521246 4743 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="northd" containerID="cri-o://cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243" gracePeriod=30 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.521300 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed" gracePeriod=30 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.521339 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="kube-rbac-proxy-node" containerID="cri-o://eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b" gracePeriod=30 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.521343 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovn-acl-logging" containerID="cri-o://6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7" gracePeriod=30 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.521960 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="sbdb" containerID="cri-o://735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca" gracePeriod=30 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.564345 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" containerID="cri-o://49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38" gracePeriod=30 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.667328 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zvknx_b0418df6-be6b-459c-8685-770bc9c99a0e/kube-multus/2.log" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.668725 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zvknx_b0418df6-be6b-459c-8685-770bc9c99a0e/kube-multus/1.log" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.668772 4743 generic.go:334] "Generic (PLEG): container finished" podID="b0418df6-be6b-459c-8685-770bc9c99a0e" containerID="bf998bc8e291a5c2248c56a257bd7070096af13d4ef62133ec4ae33e687b20dd" exitCode=2 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.668839 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zvknx" event={"ID":"b0418df6-be6b-459c-8685-770bc9c99a0e","Type":"ContainerDied","Data":"bf998bc8e291a5c2248c56a257bd7070096af13d4ef62133ec4ae33e687b20dd"} Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.668875 4743 scope.go:117] "RemoveContainer" containerID="a835846b44ccab8752f8c3816ec24e09f1ee98f2478126e532c2ef38bdb0a44b" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.669341 4743 scope.go:117] "RemoveContainer" containerID="bf998bc8e291a5c2248c56a257bd7070096af13d4ef62133ec4ae33e687b20dd" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.669589 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zvknx_openshift-multus(b0418df6-be6b-459c-8685-770bc9c99a0e)\"" pod="openshift-multus/multus-zvknx" podUID="b0418df6-be6b-459c-8685-770bc9c99a0e" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.671509 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/3.log" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.674688 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovn-acl-logging/0.log" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.676321 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovn-controller/0.log" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.677893 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed" exitCode=0 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.677927 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b" exitCode=0 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.677960 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7" exitCode=143 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.677969 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7" exitCode=143 Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.677997 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed"} Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.678026 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b"} Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.678036 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7"} Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.678051 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7"} Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.855364 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/3.log" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 
00:18:21.858538 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovn-acl-logging/0.log" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.859035 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovn-controller/0.log" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.859830 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.894193 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovn-node-metrics-cert\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.894719 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-slash\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.894856 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-kubelet\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.894990 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-ovn-kubernetes\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895127 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-node-log\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895263 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-env-overrides\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895378 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-var-lib-openvswitch\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895525 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-config\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895647 4743 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-bin\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895768 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-netns\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895861 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-systemd-units\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895971 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-openvswitch\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896072 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srvps\" (UniqueName: \"kubernetes.io/projected/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-kube-api-access-srvps\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896193 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-log-socket\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.894774 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-slash" (OuterVolumeSpecName: "host-slash") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896331 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-script-lib\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896439 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-etc-openvswitch\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.894918 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). 
InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895035 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895176 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-node-log" (OuterVolumeSpecName: "node-log") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895523 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895658 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895696 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895900 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.895987 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896033 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). 
InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896563 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896244 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-log-socket" (OuterVolumeSpecName: "log-socket") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896201 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896538 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-ovn\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896625 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896655 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-systemd\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896686 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-var-lib-cni-networks-ovn-kubernetes\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.896703 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-netd\") pod \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\" (UID: \"94c14c61-ccab-4ff7-abcd-91276e4ba6ab\") " Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897078 4743 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-slash\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897093 4743 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897104 4743 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897113 4743 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-node-log\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897121 4743 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897132 4743 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897141 4743 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897151 4743 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897161 4743 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-run-netns\") on node \"crc\" 
DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897170 4743 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897178 4743 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897187 4743 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-log-socket\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897195 4743 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897203 4743 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.897228 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.898455 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.898658 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.902294 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-kube-api-access-srvps" (OuterVolumeSpecName: "kube-api-access-srvps") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "kube-api-access-srvps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.907205 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911453 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "94c14c61-ccab-4ff7-abcd-91276e4ba6ab" (UID: "94c14c61-ccab-4ff7-abcd-91276e4ba6ab"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911522 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x977h"] Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911758 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="kube-rbac-proxy-node" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911785 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="kube-rbac-proxy-node" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911796 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911804 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911815 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911822 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911831 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovn-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911836 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovn-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911844 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911849 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911856 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911862 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 
00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911871 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911877 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911887 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="northd" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911893 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="northd" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911902 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="kubecfg-setup" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911908 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="kubecfg-setup" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911919 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="sbdb" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911927 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="sbdb" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911939 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3346ceb4-7eb8-48f1-be06-28071a292da5" containerName="collect-profiles" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911946 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="3346ceb4-7eb8-48f1-be06-28071a292da5" containerName="collect-profiles" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911956 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovn-acl-logging" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911962 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovn-acl-logging" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.911970 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="nbdb" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.911975 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="nbdb" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912058 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912069 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovn-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912079 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912085 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc 
kubenswrapper[4743]: I1123 00:18:21.912093 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovn-acl-logging" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912101 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="3346ceb4-7eb8-48f1-be06-28071a292da5" containerName="collect-profiles" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912108 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912114 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="nbdb" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912121 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="northd" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912128 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="sbdb" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912136 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912142 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="kube-rbac-proxy-node" Nov 23 00:18:21 crc kubenswrapper[4743]: E1123 00:18:21.912226 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912232 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.912315 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerName="ovnkube-controller" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.913882 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.997851 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-var-lib-openvswitch\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.997897 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.997917 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95043df4-2bbc-4c6b-8f2c-308c4e202340-env-overrides\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.997966 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95043df4-2bbc-4c6b-8f2c-308c4e202340-ovn-node-metrics-cert\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998054 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-kubelet\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998125 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-run-netns\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998151 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-node-log\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998202 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-cni-netd\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998238 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-etc-openvswitch\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998307 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-slash\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998333 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-run-systemd\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998360 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-systemd-units\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998394 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-log-socket\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998436 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ghh\" (UniqueName: \"kubernetes.io/projected/95043df4-2bbc-4c6b-8f2c-308c4e202340-kube-api-access-g2ghh\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998521 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-cni-bin\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998566 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-run-ovn\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998610 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95043df4-2bbc-4c6b-8f2c-308c4e202340-ovnkube-config\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998632 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-run-ovn-kubernetes\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998668 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-run-openvswitch\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998719 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95043df4-2bbc-4c6b-8f2c-308c4e202340-ovnkube-script-lib\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998833 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srvps\" (UniqueName: \"kubernetes.io/projected/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-kube-api-access-srvps\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998848 4743 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998862 4743 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998873 4743 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998885 4743 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:21 crc kubenswrapper[4743]: I1123 00:18:21.998897 4743 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94c14c61-ccab-4ff7-abcd-91276e4ba6ab-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100046 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-slash\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100095 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-run-systemd\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc 
kubenswrapper[4743]: I1123 00:18:22.100124 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-systemd-units\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100143 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-log-socket\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100168 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2ghh\" (UniqueName: \"kubernetes.io/projected/95043df4-2bbc-4c6b-8f2c-308c4e202340-kube-api-access-g2ghh\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100172 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-slash\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100199 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-cni-bin\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100223 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-run-ovn\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100240 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-systemd-units\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100277 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-log-socket\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100252 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-run-systemd\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100252 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/95043df4-2bbc-4c6b-8f2c-308c4e202340-ovnkube-config\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100314 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-cni-bin\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100344 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-run-ovn-kubernetes\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100354 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-run-ovn\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100371 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-run-openvswitch\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100389 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95043df4-2bbc-4c6b-8f2c-308c4e202340-ovnkube-script-lib\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100394 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-run-ovn-kubernetes\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100436 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-var-lib-openvswitch\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100465 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100505 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/95043df4-2bbc-4c6b-8f2c-308c4e202340-env-overrides\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100576 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-run-openvswitch\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100696 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100702 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95043df4-2bbc-4c6b-8f2c-308c4e202340-ovn-node-metrics-cert\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100880 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-kubelet\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100928 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-kubelet\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.101003 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-node-log\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100880 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-var-lib-openvswitch\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.100960 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-node-log\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.101086 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-run-netns\") pod 
\"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.101114 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-cni-netd\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.101143 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-etc-openvswitch\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.101147 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-run-netns\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.101191 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-etc-openvswitch\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.101253 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95043df4-2bbc-4c6b-8f2c-308c4e202340-host-cni-netd\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.101311 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95043df4-2bbc-4c6b-8f2c-308c4e202340-ovnkube-script-lib\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.101423 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95043df4-2bbc-4c6b-8f2c-308c4e202340-ovnkube-config\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.101546 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95043df4-2bbc-4c6b-8f2c-308c4e202340-env-overrides\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.103447 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95043df4-2bbc-4c6b-8f2c-308c4e202340-ovn-node-metrics-cert\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 
00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.116243 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2ghh\" (UniqueName: \"kubernetes.io/projected/95043df4-2bbc-4c6b-8f2c-308c4e202340-kube-api-access-g2ghh\") pod \"ovnkube-node-x977h\" (UID: \"95043df4-2bbc-4c6b-8f2c-308c4e202340\") " pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.235609 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.685509 4743 generic.go:334] "Generic (PLEG): container finished" podID="95043df4-2bbc-4c6b-8f2c-308c4e202340" containerID="8975f2f2aac79365cdac4ea9374dd8282b7dc871e0a6e7818345e6e8e7c46d0b" exitCode=0 Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.685634 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" event={"ID":"95043df4-2bbc-4c6b-8f2c-308c4e202340","Type":"ContainerDied","Data":"8975f2f2aac79365cdac4ea9374dd8282b7dc871e0a6e7818345e6e8e7c46d0b"} Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.685703 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" event={"ID":"95043df4-2bbc-4c6b-8f2c-308c4e202340","Type":"ContainerStarted","Data":"83600d85c64a7c28a34812f354d02557278bf6b9c8b05d9effc2e3f017453ce0"} Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.687829 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zvknx_b0418df6-be6b-459c-8685-770bc9c99a0e/kube-multus/2.log" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.692964 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovnkube-controller/3.log" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.695875 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovn-acl-logging/0.log" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.696438 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v64gz_94c14c61-ccab-4ff7-abcd-91276e4ba6ab/ovn-controller/0.log" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.696919 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38" exitCode=0 Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.696953 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca" exitCode=0 Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.696964 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0" exitCode=0 Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.696976 4743 generic.go:334] "Generic (PLEG): container finished" podID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" containerID="cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243" exitCode=0 Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.697004 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38"} Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.697030 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca"} Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.697044 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0"} Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.697053 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243"} Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.697065 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" event={"ID":"94c14c61-ccab-4ff7-abcd-91276e4ba6ab","Type":"ContainerDied","Data":"917e766fedb18fb0ed58a31f0f8be6095ccd2e66cc25cd909c4f7d6513b6159e"} Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.697080 4743 scope.go:117] "RemoveContainer" containerID="49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.697185 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v64gz" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.718722 4743 scope.go:117] "RemoveContainer" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.751677 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v64gz"] Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.756849 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v64gz"] Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.769757 4743 scope.go:117] "RemoveContainer" containerID="735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.799401 4743 scope.go:117] "RemoveContainer" containerID="c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.817589 4743 scope.go:117] "RemoveContainer" containerID="cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.832746 4743 scope.go:117] "RemoveContainer" containerID="770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.846507 4743 scope.go:117] "RemoveContainer" containerID="eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.869683 4743 scope.go:117] "RemoveContainer" containerID="6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.903844 4743 scope.go:117] "RemoveContainer" 
containerID="496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.946830 4743 scope.go:117] "RemoveContainer" containerID="96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.962115 4743 scope.go:117] "RemoveContainer" containerID="49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38" Nov 23 00:18:22 crc kubenswrapper[4743]: E1123 00:18:22.962661 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38\": container with ID starting with 49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38 not found: ID does not exist" containerID="49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.962714 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38"} err="failed to get container status \"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38\": rpc error: code = NotFound desc = could not find container \"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38\": container with ID starting with 49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.962751 4743 scope.go:117] "RemoveContainer" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:18:22 crc kubenswrapper[4743]: E1123 00:18:22.963136 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\": container with ID starting with 456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d not found: ID does not exist" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.963162 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d"} err="failed to get container status \"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\": rpc error: code = NotFound desc = could not find container \"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\": container with ID starting with 456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.963177 4743 scope.go:117] "RemoveContainer" containerID="735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca" Nov 23 00:18:22 crc kubenswrapper[4743]: E1123 00:18:22.963622 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\": container with ID starting with 735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca not found: ID does not exist" containerID="735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.963643 4743 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca"} err="failed to get container status \"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\": rpc error: code = NotFound desc = could not find container \"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\": container with ID starting with 735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.963657 4743 scope.go:117] "RemoveContainer" containerID="c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0" Nov 23 00:18:22 crc kubenswrapper[4743]: E1123 00:18:22.963956 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\": container with ID starting with c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0 not found: ID does not exist" containerID="c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.963974 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0"} err="failed to get container status \"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\": rpc error: code = NotFound desc = could not find container \"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\": container with ID starting with c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.963990 4743 scope.go:117] "RemoveContainer" containerID="cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243" Nov 23 00:18:22 crc kubenswrapper[4743]: E1123 00:18:22.964373 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\": container with ID starting with cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243 not found: ID does not exist" containerID="cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.964394 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243"} err="failed to get container status \"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\": rpc error: code = NotFound desc = could not find container \"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\": container with ID starting with cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.964437 4743 scope.go:117] "RemoveContainer" containerID="770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed" Nov 23 00:18:22 crc kubenswrapper[4743]: E1123 00:18:22.964925 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\": container with ID starting with 770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed not found: ID does not exist" 
containerID="770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.964945 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed"} err="failed to get container status \"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\": rpc error: code = NotFound desc = could not find container \"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\": container with ID starting with 770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.964957 4743 scope.go:117] "RemoveContainer" containerID="eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b" Nov 23 00:18:22 crc kubenswrapper[4743]: E1123 00:18:22.965262 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\": container with ID starting with eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b not found: ID does not exist" containerID="eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.965282 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b"} err="failed to get container status \"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\": rpc error: code = NotFound desc = could not find container \"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\": container with ID starting with eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.965299 4743 scope.go:117] "RemoveContainer" containerID="6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7" Nov 23 00:18:22 crc kubenswrapper[4743]: E1123 00:18:22.965505 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\": container with ID starting with 6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7 not found: ID does not exist" containerID="6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.965528 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7"} err="failed to get container status \"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\": rpc error: code = NotFound desc = could not find container \"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\": container with ID starting with 6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.965543 4743 scope.go:117] "RemoveContainer" containerID="496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7" Nov 23 00:18:22 crc kubenswrapper[4743]: E1123 00:18:22.965830 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\": container with ID starting with 496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7 not found: ID does not exist" containerID="496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.965850 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7"} err="failed to get container status \"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\": rpc error: code = NotFound desc = could not find container \"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\": container with ID starting with 496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.965863 4743 scope.go:117] "RemoveContainer" containerID="96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e" Nov 23 00:18:22 crc kubenswrapper[4743]: E1123 00:18:22.966085 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\": container with ID starting with 96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e not found: ID does not exist" containerID="96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.966106 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e"} err="failed to get container status \"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\": rpc error: code = NotFound desc = could not find container \"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\": container with ID starting with 96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.966119 4743 scope.go:117] "RemoveContainer" containerID="49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.966437 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38"} err="failed to get container status \"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38\": rpc error: code = NotFound desc = could not find container \"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38\": container with ID starting with 49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.966456 4743 scope.go:117] "RemoveContainer" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.966764 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d"} err="failed to get container status \"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\": rpc error: code = NotFound desc = could not find container \"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\": container with ID starting with 
456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.966801 4743 scope.go:117] "RemoveContainer" containerID="735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.967164 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca"} err="failed to get container status \"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\": rpc error: code = NotFound desc = could not find container \"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\": container with ID starting with 735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.967183 4743 scope.go:117] "RemoveContainer" containerID="c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.967468 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0"} err="failed to get container status \"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\": rpc error: code = NotFound desc = could not find container \"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\": container with ID starting with c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.967505 4743 scope.go:117] "RemoveContainer" containerID="cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.967713 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243"} err="failed to get container status \"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\": rpc error: code = NotFound desc = could not find container \"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\": container with ID starting with cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.967731 4743 scope.go:117] "RemoveContainer" containerID="770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.968016 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed"} err="failed to get container status \"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\": rpc error: code = NotFound desc = could not find container \"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\": container with ID starting with 770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.968034 4743 scope.go:117] "RemoveContainer" containerID="eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.968213 4743 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b"} err="failed to get container status \"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\": rpc error: code = NotFound desc = could not find container \"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\": container with ID starting with eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.968234 4743 scope.go:117] "RemoveContainer" containerID="6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.968557 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7"} err="failed to get container status \"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\": rpc error: code = NotFound desc = could not find container \"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\": container with ID starting with 6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.968581 4743 scope.go:117] "RemoveContainer" containerID="496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.968808 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7"} err="failed to get container status \"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\": rpc error: code = NotFound desc = could not find container \"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\": container with ID starting with 496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.968827 4743 scope.go:117] "RemoveContainer" containerID="96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.969063 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e"} err="failed to get container status \"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\": rpc error: code = NotFound desc = could not find container \"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\": container with ID starting with 96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.969084 4743 scope.go:117] "RemoveContainer" containerID="49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.969265 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38"} err="failed to get container status \"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38\": rpc error: code = NotFound desc = could not find container \"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38\": container with ID starting with 49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38 not found: ID does not exist" Nov 
23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.969281 4743 scope.go:117] "RemoveContainer" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.969615 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d"} err="failed to get container status \"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\": rpc error: code = NotFound desc = could not find container \"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\": container with ID starting with 456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.969628 4743 scope.go:117] "RemoveContainer" containerID="735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.969791 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca"} err="failed to get container status \"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\": rpc error: code = NotFound desc = could not find container \"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\": container with ID starting with 735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.969807 4743 scope.go:117] "RemoveContainer" containerID="c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.969977 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0"} err="failed to get container status \"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\": rpc error: code = NotFound desc = could not find container \"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\": container with ID starting with c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.969993 4743 scope.go:117] "RemoveContainer" containerID="cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.970147 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243"} err="failed to get container status \"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\": rpc error: code = NotFound desc = could not find container \"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\": container with ID starting with cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.970162 4743 scope.go:117] "RemoveContainer" containerID="770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.970429 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed"} err="failed to get container status 
\"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\": rpc error: code = NotFound desc = could not find container \"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\": container with ID starting with 770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.970445 4743 scope.go:117] "RemoveContainer" containerID="eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.970745 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b"} err="failed to get container status \"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\": rpc error: code = NotFound desc = could not find container \"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\": container with ID starting with eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.970765 4743 scope.go:117] "RemoveContainer" containerID="6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.971148 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7"} err="failed to get container status \"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\": rpc error: code = NotFound desc = could not find container \"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\": container with ID starting with 6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.971185 4743 scope.go:117] "RemoveContainer" containerID="496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.971461 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7"} err="failed to get container status \"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\": rpc error: code = NotFound desc = could not find container \"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\": container with ID starting with 496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.971508 4743 scope.go:117] "RemoveContainer" containerID="96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.971782 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e"} err="failed to get container status \"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\": rpc error: code = NotFound desc = could not find container \"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\": container with ID starting with 96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.971801 4743 scope.go:117] "RemoveContainer" 
containerID="49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.972027 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38"} err="failed to get container status \"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38\": rpc error: code = NotFound desc = could not find container \"49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38\": container with ID starting with 49af0616e361bd501239b9d779f80f6e435185fe68f1f26859915c1613a01c38 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.972046 4743 scope.go:117] "RemoveContainer" containerID="456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.972278 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d"} err="failed to get container status \"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\": rpc error: code = NotFound desc = could not find container \"456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d\": container with ID starting with 456c9d86267bc0c976e4a9bf64c64edfde2cc30b481fc13f69456cb47b1f452d not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.972295 4743 scope.go:117] "RemoveContainer" containerID="735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.972625 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca"} err="failed to get container status \"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\": rpc error: code = NotFound desc = could not find container \"735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca\": container with ID starting with 735b18a7942dc7c0b0eb0d4f1646df84a91040aa6c8f9e4615fd6e6b533fffca not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.972673 4743 scope.go:117] "RemoveContainer" containerID="c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.972878 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0"} err="failed to get container status \"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\": rpc error: code = NotFound desc = could not find container \"c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0\": container with ID starting with c81778492ffb7101d86159d4dcada232d0b3658f77defcc1f62aec1fed3ee8d0 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.972895 4743 scope.go:117] "RemoveContainer" containerID="cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.973076 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243"} err="failed to get container status \"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\": rpc error: code = NotFound desc = could not find 
container \"cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243\": container with ID starting with cbd693a9529fc7e742af60e6c5c9d8e1fab4f3f1bf869746f735019e5c725243 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.973092 4743 scope.go:117] "RemoveContainer" containerID="770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.973286 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed"} err="failed to get container status \"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\": rpc error: code = NotFound desc = could not find container \"770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed\": container with ID starting with 770f0b5a65e272c9beb53085570f8b47370465c5e45a0ab758aae326e44b4eed not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.973302 4743 scope.go:117] "RemoveContainer" containerID="eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.973470 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b"} err="failed to get container status \"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\": rpc error: code = NotFound desc = could not find container \"eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b\": container with ID starting with eacc283a7ec47a9b491ad9101cae2f6e0e1848e442deffc2e6b7a3c5820ef42b not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.973503 4743 scope.go:117] "RemoveContainer" containerID="6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.973726 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7"} err="failed to get container status \"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\": rpc error: code = NotFound desc = could not find container \"6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7\": container with ID starting with 6d8533e04728bf809306664d121f51fda358bb78e32b3c1358529e6f2c005da7 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.973749 4743 scope.go:117] "RemoveContainer" containerID="496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.973964 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7"} err="failed to get container status \"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\": rpc error: code = NotFound desc = could not find container \"496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7\": container with ID starting with 496357af1de66a1ed41d4ee10e0fece9f2e21da9a3554eb327e57821cccd9fb7 not found: ID does not exist" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.973985 4743 scope.go:117] "RemoveContainer" containerID="96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e" Nov 23 00:18:22 crc kubenswrapper[4743]: I1123 00:18:22.974858 4743 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e"} err="failed to get container status \"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\": rpc error: code = NotFound desc = could not find container \"96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e\": container with ID starting with 96e641b45b7567a417cfe4f9df5c151117e98f02f183da1d93119a3bf60a996e not found: ID does not exist" Nov 23 00:18:23 crc kubenswrapper[4743]: I1123 00:18:23.710281 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" event={"ID":"95043df4-2bbc-4c6b-8f2c-308c4e202340","Type":"ContainerStarted","Data":"3de77c12cdcb12d66c5d7a42f7dcaf2743ea41c61f883679cb92ef2df9484dc0"} Nov 23 00:18:23 crc kubenswrapper[4743]: I1123 00:18:23.710684 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" event={"ID":"95043df4-2bbc-4c6b-8f2c-308c4e202340","Type":"ContainerStarted","Data":"7af34a36997df4f891faaccfb995811315dafae941ea9b8be658de99dcd391a1"} Nov 23 00:18:23 crc kubenswrapper[4743]: I1123 00:18:23.710720 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" event={"ID":"95043df4-2bbc-4c6b-8f2c-308c4e202340","Type":"ContainerStarted","Data":"a297e14166b653c4bdec5681acc65b7d6de9aecc5ddc6dfd6e02041f834d9c2c"} Nov 23 00:18:23 crc kubenswrapper[4743]: I1123 00:18:23.710745 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" event={"ID":"95043df4-2bbc-4c6b-8f2c-308c4e202340","Type":"ContainerStarted","Data":"94d5d0580bc89b45c6f9a6a9cf96364ca8a390cd41d843ba4a8675e6be742005"} Nov 23 00:18:23 crc kubenswrapper[4743]: I1123 00:18:23.710772 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" event={"ID":"95043df4-2bbc-4c6b-8f2c-308c4e202340","Type":"ContainerStarted","Data":"7ffcb93f4e7f448dac66b94f11f103bedf5eaf2e8901f5809b5a17918d3d2d3f"} Nov 23 00:18:23 crc kubenswrapper[4743]: I1123 00:18:23.710794 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" event={"ID":"95043df4-2bbc-4c6b-8f2c-308c4e202340","Type":"ContainerStarted","Data":"c0ff3ab3c5be0bb08395a7a50deb477825ad85fea5332f75980120a16bad67ee"} Nov 23 00:18:24 crc kubenswrapper[4743]: I1123 00:18:24.735311 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94c14c61-ccab-4ff7-abcd-91276e4ba6ab" path="/var/lib/kubelet/pods/94c14c61-ccab-4ff7-abcd-91276e4ba6ab/volumes" Nov 23 00:18:26 crc kubenswrapper[4743]: I1123 00:18:26.737272 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" event={"ID":"95043df4-2bbc-4c6b-8f2c-308c4e202340","Type":"ContainerStarted","Data":"d3564ccb3970d82b8bb40de257a8036e44da7f28f06762dca1a0acd58095e99f"} Nov 23 00:18:28 crc kubenswrapper[4743]: I1123 00:18:28.750512 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" event={"ID":"95043df4-2bbc-4c6b-8f2c-308c4e202340","Type":"ContainerStarted","Data":"e872e1cba21f0ae1a3b2afa43e583b6aa4d125bfb8e2a3d6f2345ccff9dd1122"} Nov 23 00:18:28 crc kubenswrapper[4743]: I1123 00:18:28.750927 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:28 crc 
kubenswrapper[4743]: I1123 00:18:28.750983 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:28 crc kubenswrapper[4743]: I1123 00:18:28.751005 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:28 crc kubenswrapper[4743]: I1123 00:18:28.778115 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:28 crc kubenswrapper[4743]: I1123 00:18:28.780017 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:18:28 crc kubenswrapper[4743]: I1123 00:18:28.788224 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" podStartSLOduration=7.7882086820000005 podStartE2EDuration="7.788208682s" podCreationTimestamp="2025-11-23 00:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:18:28.783640511 +0000 UTC m=+700.861738678" watchObservedRunningTime="2025-11-23 00:18:28.788208682 +0000 UTC m=+700.866306819" Nov 23 00:18:36 crc kubenswrapper[4743]: I1123 00:18:36.722575 4743 scope.go:117] "RemoveContainer" containerID="bf998bc8e291a5c2248c56a257bd7070096af13d4ef62133ec4ae33e687b20dd" Nov 23 00:18:36 crc kubenswrapper[4743]: E1123 00:18:36.723685 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zvknx_openshift-multus(b0418df6-be6b-459c-8685-770bc9c99a0e)\"" pod="openshift-multus/multus-zvknx" podUID="b0418df6-be6b-459c-8685-770bc9c99a0e" Nov 23 00:18:49 crc kubenswrapper[4743]: I1123 00:18:49.722274 4743 scope.go:117] "RemoveContainer" containerID="bf998bc8e291a5c2248c56a257bd7070096af13d4ef62133ec4ae33e687b20dd" Nov 23 00:18:50 crc kubenswrapper[4743]: I1123 00:18:50.902779 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zvknx_b0418df6-be6b-459c-8685-770bc9c99a0e/kube-multus/2.log" Nov 23 00:18:50 crc kubenswrapper[4743]: I1123 00:18:50.903030 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zvknx" event={"ID":"b0418df6-be6b-459c-8685-770bc9c99a0e","Type":"ContainerStarted","Data":"b1a1485bb6555d3c12b3c5548f69b50bdb5067fe374a75d2f6cd14ba9f319ddd"} Nov 23 00:18:52 crc kubenswrapper[4743]: I1123 00:18:52.272432 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x977h" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.098601 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pqx6"] Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.100083 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" podUID="d39201fe-fa08-49ca-adec-15441d9cbaa5" containerName="controller-manager" containerID="cri-o://fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80" gracePeriod=30 Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.186800 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52"] 
Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.186989 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" podUID="6e63d320-241c-4f1e-ace2-6b28a8d9d338" containerName="route-controller-manager" containerID="cri-o://bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239" gracePeriod=30 Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.477399 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.527866 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.558381 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-config\") pod \"d39201fe-fa08-49ca-adec-15441d9cbaa5\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.558428 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-config\") pod \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.558542 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvnjf\" (UniqueName: \"kubernetes.io/projected/d39201fe-fa08-49ca-adec-15441d9cbaa5-kube-api-access-hvnjf\") pod \"d39201fe-fa08-49ca-adec-15441d9cbaa5\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.558568 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e63d320-241c-4f1e-ace2-6b28a8d9d338-serving-cert\") pod \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.558590 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-proxy-ca-bundles\") pod \"d39201fe-fa08-49ca-adec-15441d9cbaa5\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.558642 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lp2v\" (UniqueName: \"kubernetes.io/projected/6e63d320-241c-4f1e-ace2-6b28a8d9d338-kube-api-access-2lp2v\") pod \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.558662 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39201fe-fa08-49ca-adec-15441d9cbaa5-serving-cert\") pod \"d39201fe-fa08-49ca-adec-15441d9cbaa5\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.558710 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-client-ca\") pod \"d39201fe-fa08-49ca-adec-15441d9cbaa5\" (UID: \"d39201fe-fa08-49ca-adec-15441d9cbaa5\") " Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.558737 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-client-ca\") pod \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\" (UID: \"6e63d320-241c-4f1e-ace2-6b28a8d9d338\") " Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.559447 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-client-ca" (OuterVolumeSpecName: "client-ca") pod "6e63d320-241c-4f1e-ace2-6b28a8d9d338" (UID: "6e63d320-241c-4f1e-ace2-6b28a8d9d338"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.559597 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-config" (OuterVolumeSpecName: "config") pod "6e63d320-241c-4f1e-ace2-6b28a8d9d338" (UID: "6e63d320-241c-4f1e-ace2-6b28a8d9d338"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.559664 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-client-ca" (OuterVolumeSpecName: "client-ca") pod "d39201fe-fa08-49ca-adec-15441d9cbaa5" (UID: "d39201fe-fa08-49ca-adec-15441d9cbaa5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.559745 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d39201fe-fa08-49ca-adec-15441d9cbaa5" (UID: "d39201fe-fa08-49ca-adec-15441d9cbaa5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.559834 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-config" (OuterVolumeSpecName: "config") pod "d39201fe-fa08-49ca-adec-15441d9cbaa5" (UID: "d39201fe-fa08-49ca-adec-15441d9cbaa5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.565444 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e63d320-241c-4f1e-ace2-6b28a8d9d338-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6e63d320-241c-4f1e-ace2-6b28a8d9d338" (UID: "6e63d320-241c-4f1e-ace2-6b28a8d9d338"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.565906 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e63d320-241c-4f1e-ace2-6b28a8d9d338-kube-api-access-2lp2v" (OuterVolumeSpecName: "kube-api-access-2lp2v") pod "6e63d320-241c-4f1e-ace2-6b28a8d9d338" (UID: "6e63d320-241c-4f1e-ace2-6b28a8d9d338"). InnerVolumeSpecName "kube-api-access-2lp2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.566028 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39201fe-fa08-49ca-adec-15441d9cbaa5-kube-api-access-hvnjf" (OuterVolumeSpecName: "kube-api-access-hvnjf") pod "d39201fe-fa08-49ca-adec-15441d9cbaa5" (UID: "d39201fe-fa08-49ca-adec-15441d9cbaa5"). InnerVolumeSpecName "kube-api-access-hvnjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.566071 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39201fe-fa08-49ca-adec-15441d9cbaa5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d39201fe-fa08-49ca-adec-15441d9cbaa5" (UID: "d39201fe-fa08-49ca-adec-15441d9cbaa5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.660696 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.660755 4743 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.660769 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvnjf\" (UniqueName: \"kubernetes.io/projected/d39201fe-fa08-49ca-adec-15441d9cbaa5-kube-api-access-hvnjf\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.660785 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e63d320-241c-4f1e-ace2-6b28a8d9d338-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.660801 4743 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.660817 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lp2v\" (UniqueName: \"kubernetes.io/projected/6e63d320-241c-4f1e-ace2-6b28a8d9d338-kube-api-access-2lp2v\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.660830 4743 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39201fe-fa08-49ca-adec-15441d9cbaa5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.660845 4743 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d39201fe-fa08-49ca-adec-15441d9cbaa5-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:20 crc kubenswrapper[4743]: I1123 00:19:20.660858 4743 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e63d320-241c-4f1e-ace2-6b28a8d9d338-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.113522 4743 generic.go:334] "Generic (PLEG): container finished" podID="d39201fe-fa08-49ca-adec-15441d9cbaa5" 
containerID="fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80" exitCode=0 Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.113830 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" event={"ID":"d39201fe-fa08-49ca-adec-15441d9cbaa5","Type":"ContainerDied","Data":"fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80"} Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.113864 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.113899 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7pqx6" event={"ID":"d39201fe-fa08-49ca-adec-15441d9cbaa5","Type":"ContainerDied","Data":"54a5001214ecda3e593468fdbb340b605ef7cc442d4235039757517db3e0fea6"} Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.113942 4743 scope.go:117] "RemoveContainer" containerID="fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.118787 4743 generic.go:334] "Generic (PLEG): container finished" podID="6e63d320-241c-4f1e-ace2-6b28a8d9d338" containerID="bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239" exitCode=0 Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.118840 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" event={"ID":"6e63d320-241c-4f1e-ace2-6b28a8d9d338","Type":"ContainerDied","Data":"bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239"} Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.118877 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" event={"ID":"6e63d320-241c-4f1e-ace2-6b28a8d9d338","Type":"ContainerDied","Data":"6e89f99e39ed436eb7fa026c32877aa90cdc44bb5e3960d8f0c191b502f775e7"} Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.118983 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.143696 4743 scope.go:117] "RemoveContainer" containerID="fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80" Nov 23 00:19:21 crc kubenswrapper[4743]: E1123 00:19:21.144610 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80\": container with ID starting with fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80 not found: ID does not exist" containerID="fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.144657 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80"} err="failed to get container status \"fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80\": rpc error: code = NotFound desc = could not find container \"fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80\": container with ID starting with fbed14f00806fd5448c52df875616463223fec8ec00d2397236577eb6d07ed80 not found: ID does not exist" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.144687 4743 scope.go:117] "RemoveContainer" containerID="bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.144789 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pqx6"] Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.153725 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pqx6"] Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.157274 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52"] Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.160284 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-52h52"] Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.165661 4743 scope.go:117] "RemoveContainer" containerID="bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239" Nov 23 00:19:21 crc kubenswrapper[4743]: E1123 00:19:21.166201 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239\": container with ID starting with bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239 not found: ID does not exist" containerID="bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.166287 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239"} err="failed to get container status \"bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239\": rpc error: code = NotFound desc = could not find container \"bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239\": container with ID starting with bc6425db48627d58c01705b1ff90ecca92c071248301e0f5866e0757da3b1239 not found: ID does not exist" Nov 23 
00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.315324 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs"] Nov 23 00:19:21 crc kubenswrapper[4743]: E1123 00:19:21.315636 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e63d320-241c-4f1e-ace2-6b28a8d9d338" containerName="route-controller-manager" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.315662 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e63d320-241c-4f1e-ace2-6b28a8d9d338" containerName="route-controller-manager" Nov 23 00:19:21 crc kubenswrapper[4743]: E1123 00:19:21.315697 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39201fe-fa08-49ca-adec-15441d9cbaa5" containerName="controller-manager" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.315710 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39201fe-fa08-49ca-adec-15441d9cbaa5" containerName="controller-manager" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.315873 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e63d320-241c-4f1e-ace2-6b28a8d9d338" containerName="route-controller-manager" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.315908 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39201fe-fa08-49ca-adec-15441d9cbaa5" containerName="controller-manager" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.316454 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.318922 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.318931 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.321216 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.321439 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.321663 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.321869 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.336050 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs"] Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.372524 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-config\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.372611 4743 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-client-ca\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.372660 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-serving-cert\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.372732 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv9zs\" (UniqueName: \"kubernetes.io/projected/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-kube-api-access-rv9zs\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.473641 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9zs\" (UniqueName: \"kubernetes.io/projected/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-kube-api-access-rv9zs\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.473717 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-config\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.473760 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-client-ca\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.473817 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-serving-cert\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.474745 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-client-ca\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.474933 4743 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-config\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.484878 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-serving-cert\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.504603 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv9zs\" (UniqueName: \"kubernetes.io/projected/838531f5-cbc7-4bde-9fae-5fe5fac7d8b8-kube-api-access-rv9zs\") pod \"route-controller-manager-5c9f86db4f-bqnrs\" (UID: \"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8\") " pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.636779 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:21 crc kubenswrapper[4743]: I1123 00:19:21.923012 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs"] Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.009901 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d4776949-mg5df"] Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.010563 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.014765 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.015243 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.015637 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.015977 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.015980 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.016124 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.027855 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d4776949-mg5df"] Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.035457 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.082435 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397e8031-67cb-443d-9b01-2ce0a1206a20-serving-cert\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.082565 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvhg7\" (UniqueName: \"kubernetes.io/projected/397e8031-67cb-443d-9b01-2ce0a1206a20-kube-api-access-kvhg7\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.082603 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397e8031-67cb-443d-9b01-2ce0a1206a20-client-ca\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.082681 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397e8031-67cb-443d-9b01-2ce0a1206a20-config\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.082706 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/397e8031-67cb-443d-9b01-2ce0a1206a20-proxy-ca-bundles\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.126353 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" event={"ID":"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8","Type":"ContainerStarted","Data":"2458880e19a0e5424998f1d96882e05944a96e9a3b5822332e0b825beef33f6e"} Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.126400 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" event={"ID":"838531f5-cbc7-4bde-9fae-5fe5fac7d8b8","Type":"ContainerStarted","Data":"1fb36db65797b58288b460568734fa0979faa9212abb67f90b3d84904cebe647"} Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.126589 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.127681 4743 patch_prober.go:28] interesting pod/route-controller-manager-5c9f86db4f-bqnrs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.127714 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" podUID="838531f5-cbc7-4bde-9fae-5fe5fac7d8b8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.144957 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" podStartSLOduration=1.144932128 podStartE2EDuration="1.144932128s" podCreationTimestamp="2025-11-23 00:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:19:22.14253572 +0000 UTC m=+754.220633857" watchObservedRunningTime="2025-11-23 00:19:22.144932128 +0000 UTC m=+754.223030245" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.184284 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397e8031-67cb-443d-9b01-2ce0a1206a20-serving-cert\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.184391 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvhg7\" (UniqueName: \"kubernetes.io/projected/397e8031-67cb-443d-9b01-2ce0a1206a20-kube-api-access-kvhg7\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.184424 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/397e8031-67cb-443d-9b01-2ce0a1206a20-client-ca\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.184603 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397e8031-67cb-443d-9b01-2ce0a1206a20-config\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.184658 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397e8031-67cb-443d-9b01-2ce0a1206a20-proxy-ca-bundles\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.185926 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397e8031-67cb-443d-9b01-2ce0a1206a20-config\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.186436 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397e8031-67cb-443d-9b01-2ce0a1206a20-proxy-ca-bundles\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.186553 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397e8031-67cb-443d-9b01-2ce0a1206a20-client-ca\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.188909 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397e8031-67cb-443d-9b01-2ce0a1206a20-serving-cert\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.202707 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvhg7\" (UniqueName: \"kubernetes.io/projected/397e8031-67cb-443d-9b01-2ce0a1206a20-kube-api-access-kvhg7\") pod \"controller-manager-d4776949-mg5df\" (UID: \"397e8031-67cb-443d-9b01-2ce0a1206a20\") " pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.362274 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.575353 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d4776949-mg5df"] Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.730823 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e63d320-241c-4f1e-ace2-6b28a8d9d338" path="/var/lib/kubelet/pods/6e63d320-241c-4f1e-ace2-6b28a8d9d338/volumes" Nov 23 00:19:22 crc kubenswrapper[4743]: I1123 00:19:22.731829 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39201fe-fa08-49ca-adec-15441d9cbaa5" path="/var/lib/kubelet/pods/d39201fe-fa08-49ca-adec-15441d9cbaa5/volumes" Nov 23 00:19:23 crc kubenswrapper[4743]: I1123 00:19:23.146427 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4776949-mg5df" event={"ID":"397e8031-67cb-443d-9b01-2ce0a1206a20","Type":"ContainerStarted","Data":"be5f3cbcf9e7ef444f84462c7aa2a5e4dd3ef8f109239cbc83cf8cec354ea62d"} Nov 23 00:19:23 crc kubenswrapper[4743]: I1123 00:19:23.146951 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4776949-mg5df" event={"ID":"397e8031-67cb-443d-9b01-2ce0a1206a20","Type":"ContainerStarted","Data":"1143e12c46a1eb3fc820a676994bbb3ee9e88a012f23c261cc2ad63c07fe4305"} Nov 23 00:19:23 crc kubenswrapper[4743]: I1123 00:19:23.152247 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c9f86db4f-bqnrs" Nov 23 00:19:23 crc kubenswrapper[4743]: I1123 00:19:23.168540 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d4776949-mg5df" podStartSLOduration=3.168517612 podStartE2EDuration="3.168517612s" podCreationTimestamp="2025-11-23 00:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:19:23.166031902 +0000 UTC m=+755.244130039" watchObservedRunningTime="2025-11-23 00:19:23.168517612 +0000 UTC m=+755.246615739" Nov 23 00:19:23 crc kubenswrapper[4743]: I1123 00:19:23.690356 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:19:23 crc kubenswrapper[4743]: I1123 00:19:23.690467 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:19:24 crc kubenswrapper[4743]: I1123 00:19:24.151662 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:24 crc kubenswrapper[4743]: I1123 00:19:24.159987 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d4776949-mg5df" Nov 23 00:19:29 crc kubenswrapper[4743]: I1123 00:19:29.665067 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-qlkrc"] Nov 23 00:19:29 crc kubenswrapper[4743]: I1123 00:19:29.665319 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qlkrc" podUID="ff76a6bc-06e4-4da8-828c-57a37fa57681" containerName="registry-server" containerID="cri-o://fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7" gracePeriod=30 Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.183529 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.192581 4743 generic.go:334] "Generic (PLEG): container finished" podID="ff76a6bc-06e4-4da8-828c-57a37fa57681" containerID="fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7" exitCode=0 Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.192677 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlkrc" event={"ID":"ff76a6bc-06e4-4da8-828c-57a37fa57681","Type":"ContainerDied","Data":"fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7"} Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.192738 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlkrc" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.193077 4743 scope.go:117] "RemoveContainer" containerID="fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.193057 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlkrc" event={"ID":"ff76a6bc-06e4-4da8-828c-57a37fa57681","Type":"ContainerDied","Data":"06655b97f1687dda8ab171db45ffad00c8b499803d90b5b686db380f1992cbcd"} Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.216742 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcmml\" (UniqueName: \"kubernetes.io/projected/ff76a6bc-06e4-4da8-828c-57a37fa57681-kube-api-access-zcmml\") pod \"ff76a6bc-06e4-4da8-828c-57a37fa57681\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.216877 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-utilities\") pod \"ff76a6bc-06e4-4da8-828c-57a37fa57681\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.216905 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-catalog-content\") pod \"ff76a6bc-06e4-4da8-828c-57a37fa57681\" (UID: \"ff76a6bc-06e4-4da8-828c-57a37fa57681\") " Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.218553 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-utilities" (OuterVolumeSpecName: "utilities") pod "ff76a6bc-06e4-4da8-828c-57a37fa57681" (UID: "ff76a6bc-06e4-4da8-828c-57a37fa57681"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.227969 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff76a6bc-06e4-4da8-828c-57a37fa57681-kube-api-access-zcmml" (OuterVolumeSpecName: "kube-api-access-zcmml") pod "ff76a6bc-06e4-4da8-828c-57a37fa57681" (UID: "ff76a6bc-06e4-4da8-828c-57a37fa57681"). InnerVolumeSpecName "kube-api-access-zcmml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.230874 4743 scope.go:117] "RemoveContainer" containerID="207cf4eb50a1969aeca8b79778e70de550ff0559d4f6f7c606a7c03f88379564" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.241329 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff76a6bc-06e4-4da8-828c-57a37fa57681" (UID: "ff76a6bc-06e4-4da8-828c-57a37fa57681"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.257509 4743 scope.go:117] "RemoveContainer" containerID="b980237875ad8f8f1849e4415769231893325ddc7cd427a7a04005ef193458c8" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.281792 4743 scope.go:117] "RemoveContainer" containerID="fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7" Nov 23 00:19:30 crc kubenswrapper[4743]: E1123 00:19:30.282264 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7\": container with ID starting with fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7 not found: ID does not exist" containerID="fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.282348 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7"} err="failed to get container status \"fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7\": rpc error: code = NotFound desc = could not find container \"fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7\": container with ID starting with fac9870cf65a0ab186603deacae429c599e2a53d01e1c976b7468c583e21f9e7 not found: ID does not exist" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.282393 4743 scope.go:117] "RemoveContainer" containerID="207cf4eb50a1969aeca8b79778e70de550ff0559d4f6f7c606a7c03f88379564" Nov 23 00:19:30 crc kubenswrapper[4743]: E1123 00:19:30.283105 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"207cf4eb50a1969aeca8b79778e70de550ff0559d4f6f7c606a7c03f88379564\": container with ID starting with 207cf4eb50a1969aeca8b79778e70de550ff0559d4f6f7c606a7c03f88379564 not found: ID does not exist" containerID="207cf4eb50a1969aeca8b79778e70de550ff0559d4f6f7c606a7c03f88379564" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.283185 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"207cf4eb50a1969aeca8b79778e70de550ff0559d4f6f7c606a7c03f88379564"} err="failed to get container status \"207cf4eb50a1969aeca8b79778e70de550ff0559d4f6f7c606a7c03f88379564\": rpc error: 
code = NotFound desc = could not find container \"207cf4eb50a1969aeca8b79778e70de550ff0559d4f6f7c606a7c03f88379564\": container with ID starting with 207cf4eb50a1969aeca8b79778e70de550ff0559d4f6f7c606a7c03f88379564 not found: ID does not exist" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.283258 4743 scope.go:117] "RemoveContainer" containerID="b980237875ad8f8f1849e4415769231893325ddc7cd427a7a04005ef193458c8" Nov 23 00:19:30 crc kubenswrapper[4743]: E1123 00:19:30.284155 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b980237875ad8f8f1849e4415769231893325ddc7cd427a7a04005ef193458c8\": container with ID starting with b980237875ad8f8f1849e4415769231893325ddc7cd427a7a04005ef193458c8 not found: ID does not exist" containerID="b980237875ad8f8f1849e4415769231893325ddc7cd427a7a04005ef193458c8" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.284201 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b980237875ad8f8f1849e4415769231893325ddc7cd427a7a04005ef193458c8"} err="failed to get container status \"b980237875ad8f8f1849e4415769231893325ddc7cd427a7a04005ef193458c8\": rpc error: code = NotFound desc = could not find container \"b980237875ad8f8f1849e4415769231893325ddc7cd427a7a04005ef193458c8\": container with ID starting with b980237875ad8f8f1849e4415769231893325ddc7cd427a7a04005ef193458c8 not found: ID does not exist" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.318959 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcmml\" (UniqueName: \"kubernetes.io/projected/ff76a6bc-06e4-4da8-828c-57a37fa57681-kube-api-access-zcmml\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.319002 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.319038 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff76a6bc-06e4-4da8-828c-57a37fa57681-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.392541 4743 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.523235 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlkrc"] Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.528043 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlkrc"] Nov 23 00:19:30 crc kubenswrapper[4743]: I1123 00:19:30.729472 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff76a6bc-06e4-4da8-828c-57a37fa57681" path="/var/lib/kubelet/pods/ff76a6bc-06e4-4da8-828c-57a37fa57681/volumes" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.420847 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f"] Nov 23 00:19:33 crc kubenswrapper[4743]: E1123 00:19:33.421410 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff76a6bc-06e4-4da8-828c-57a37fa57681" containerName="registry-server" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.421426 
4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff76a6bc-06e4-4da8-828c-57a37fa57681" containerName="registry-server" Nov 23 00:19:33 crc kubenswrapper[4743]: E1123 00:19:33.421437 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff76a6bc-06e4-4da8-828c-57a37fa57681" containerName="extract-content" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.421444 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff76a6bc-06e4-4da8-828c-57a37fa57681" containerName="extract-content" Nov 23 00:19:33 crc kubenswrapper[4743]: E1123 00:19:33.421467 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff76a6bc-06e4-4da8-828c-57a37fa57681" containerName="extract-utilities" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.421474 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff76a6bc-06e4-4da8-828c-57a37fa57681" containerName="extract-utilities" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.421610 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff76a6bc-06e4-4da8-828c-57a37fa57681" containerName="registry-server" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.422341 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.424364 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.442872 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f"] Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.473023 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq8m7\" (UniqueName: \"kubernetes.io/projected/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-kube-api-access-cq8m7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f\" (UID: \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.473074 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f\" (UID: \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.473137 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f\" (UID: \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.574150 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f\" (UID: 
\"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.574183 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f\" (UID: \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.574270 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq8m7\" (UniqueName: \"kubernetes.io/projected/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-kube-api-access-cq8m7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f\" (UID: \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.574808 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f\" (UID: \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.574811 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f\" (UID: \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.597385 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq8m7\" (UniqueName: \"kubernetes.io/projected/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-kube-api-access-cq8m7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f\" (UID: \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:33 crc kubenswrapper[4743]: I1123 00:19:33.738991 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:34 crc kubenswrapper[4743]: I1123 00:19:34.190283 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f"] Nov 23 00:19:34 crc kubenswrapper[4743]: W1123 00:19:34.206417 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda46c6df3_6857_4e57_bfe3_e6dd3c3a3484.slice/crio-8ffdc5f98f1b13d43ae552c948434b6b1a2eed051a78d924b29210b2793b104a WatchSource:0}: Error finding container 8ffdc5f98f1b13d43ae552c948434b6b1a2eed051a78d924b29210b2793b104a: Status 404 returned error can't find the container with id 8ffdc5f98f1b13d43ae552c948434b6b1a2eed051a78d924b29210b2793b104a Nov 23 00:19:34 crc kubenswrapper[4743]: I1123 00:19:34.224125 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" event={"ID":"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484","Type":"ContainerStarted","Data":"8ffdc5f98f1b13d43ae552c948434b6b1a2eed051a78d924b29210b2793b104a"} Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.231324 4743 generic.go:334] "Generic (PLEG): container finished" podID="a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" containerID="1890c24812684859601e7b81098d7e4cede11671e504cd1b0e1d2bdc409bc9dd" exitCode=0 Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.231376 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" event={"ID":"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484","Type":"ContainerDied","Data":"1890c24812684859601e7b81098d7e4cede11671e504cd1b0e1d2bdc409bc9dd"} Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.234721 4743 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.774143 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-twgjs"] Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.777056 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.797787 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twgjs"] Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.807695 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-utilities\") pod \"redhat-operators-twgjs\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.807771 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqnm7\" (UniqueName: \"kubernetes.io/projected/8d010bb3-db87-45a5-92bf-2e2bc375534e-kube-api-access-sqnm7\") pod \"redhat-operators-twgjs\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.807844 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-catalog-content\") pod \"redhat-operators-twgjs\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.909253 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-utilities\") pod \"redhat-operators-twgjs\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.909339 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqnm7\" (UniqueName: \"kubernetes.io/projected/8d010bb3-db87-45a5-92bf-2e2bc375534e-kube-api-access-sqnm7\") pod \"redhat-operators-twgjs\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.909410 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-catalog-content\") pod \"redhat-operators-twgjs\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.910220 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-catalog-content\") pod \"redhat-operators-twgjs\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.910317 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-utilities\") pod \"redhat-operators-twgjs\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:35 crc kubenswrapper[4743]: I1123 00:19:35.935634 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sqnm7\" (UniqueName: \"kubernetes.io/projected/8d010bb3-db87-45a5-92bf-2e2bc375534e-kube-api-access-sqnm7\") pod \"redhat-operators-twgjs\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:36 crc kubenswrapper[4743]: I1123 00:19:36.106971 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:36 crc kubenswrapper[4743]: I1123 00:19:36.536319 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twgjs"] Nov 23 00:19:36 crc kubenswrapper[4743]: W1123 00:19:36.543424 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d010bb3_db87_45a5_92bf_2e2bc375534e.slice/crio-9506b6a069c0a030b3eb3d497c4e83bb249a020af57813ba84151546cf49cfb2 WatchSource:0}: Error finding container 9506b6a069c0a030b3eb3d497c4e83bb249a020af57813ba84151546cf49cfb2: Status 404 returned error can't find the container with id 9506b6a069c0a030b3eb3d497c4e83bb249a020af57813ba84151546cf49cfb2 Nov 23 00:19:37 crc kubenswrapper[4743]: I1123 00:19:37.247884 4743 generic.go:334] "Generic (PLEG): container finished" podID="a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" containerID="808ec3fe3a26dbc85c2bfdea048f2ae8aa5ad996a26601e2be1c5d5779934e2c" exitCode=0 Nov 23 00:19:37 crc kubenswrapper[4743]: I1123 00:19:37.247967 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" event={"ID":"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484","Type":"ContainerDied","Data":"808ec3fe3a26dbc85c2bfdea048f2ae8aa5ad996a26601e2be1c5d5779934e2c"} Nov 23 00:19:37 crc kubenswrapper[4743]: I1123 00:19:37.251549 4743 generic.go:334] "Generic (PLEG): container finished" podID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerID="b1cf5547ddb2843a61eb94b8fea42e0f36d69b8bf962790e16f6e139954ff694" exitCode=0 Nov 23 00:19:37 crc kubenswrapper[4743]: I1123 00:19:37.251609 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twgjs" event={"ID":"8d010bb3-db87-45a5-92bf-2e2bc375534e","Type":"ContainerDied","Data":"b1cf5547ddb2843a61eb94b8fea42e0f36d69b8bf962790e16f6e139954ff694"} Nov 23 00:19:37 crc kubenswrapper[4743]: I1123 00:19:37.251635 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twgjs" event={"ID":"8d010bb3-db87-45a5-92bf-2e2bc375534e","Type":"ContainerStarted","Data":"9506b6a069c0a030b3eb3d497c4e83bb249a020af57813ba84151546cf49cfb2"} Nov 23 00:19:38 crc kubenswrapper[4743]: I1123 00:19:38.263532 4743 generic.go:334] "Generic (PLEG): container finished" podID="a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" containerID="80f2dfe8c87db7f12bb0ab76c1504da839550707d4e9c3f89e6f27fdd83c2b0d" exitCode=0 Nov 23 00:19:38 crc kubenswrapper[4743]: I1123 00:19:38.264800 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" event={"ID":"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484","Type":"ContainerDied","Data":"80f2dfe8c87db7f12bb0ab76c1504da839550707d4e9c3f89e6f27fdd83c2b0d"} Nov 23 00:19:38 crc kubenswrapper[4743]: I1123 00:19:38.282054 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twgjs" 
event={"ID":"8d010bb3-db87-45a5-92bf-2e2bc375534e","Type":"ContainerStarted","Data":"3ed5d8902735fcda1ab5fa8dab8d9c0e0f21c59620ce71cd3e9a84980e0e3c66"} Nov 23 00:19:39 crc kubenswrapper[4743]: I1123 00:19:39.291465 4743 generic.go:334] "Generic (PLEG): container finished" podID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerID="3ed5d8902735fcda1ab5fa8dab8d9c0e0f21c59620ce71cd3e9a84980e0e3c66" exitCode=0 Nov 23 00:19:39 crc kubenswrapper[4743]: I1123 00:19:39.291553 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twgjs" event={"ID":"8d010bb3-db87-45a5-92bf-2e2bc375534e","Type":"ContainerDied","Data":"3ed5d8902735fcda1ab5fa8dab8d9c0e0f21c59620ce71cd3e9a84980e0e3c66"} Nov 23 00:19:39 crc kubenswrapper[4743]: I1123 00:19:39.670058 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:39 crc kubenswrapper[4743]: I1123 00:19:39.761200 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-bundle\") pod \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\" (UID: \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " Nov 23 00:19:39 crc kubenswrapper[4743]: I1123 00:19:39.761334 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq8m7\" (UniqueName: \"kubernetes.io/projected/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-kube-api-access-cq8m7\") pod \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\" (UID: \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " Nov 23 00:19:39 crc kubenswrapper[4743]: I1123 00:19:39.761576 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-util\") pod \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\" (UID: \"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484\") " Nov 23 00:19:39 crc kubenswrapper[4743]: I1123 00:19:39.766632 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-bundle" (OuterVolumeSpecName: "bundle") pod "a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" (UID: "a46c6df3-6857-4e57-bfe3-e6dd3c3a3484"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:19:39 crc kubenswrapper[4743]: I1123 00:19:39.772642 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-kube-api-access-cq8m7" (OuterVolumeSpecName: "kube-api-access-cq8m7") pod "a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" (UID: "a46c6df3-6857-4e57-bfe3-e6dd3c3a3484"). InnerVolumeSpecName "kube-api-access-cq8m7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:19:39 crc kubenswrapper[4743]: I1123 00:19:39.864192 4743 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:39 crc kubenswrapper[4743]: I1123 00:19:39.864269 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq8m7\" (UniqueName: \"kubernetes.io/projected/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-kube-api-access-cq8m7\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:40 crc kubenswrapper[4743]: I1123 00:19:40.141418 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-util" (OuterVolumeSpecName: "util") pod "a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" (UID: "a46c6df3-6857-4e57-bfe3-e6dd3c3a3484"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:19:40 crc kubenswrapper[4743]: I1123 00:19:40.169590 4743 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a46c6df3-6857-4e57-bfe3-e6dd3c3a3484-util\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:40 crc kubenswrapper[4743]: I1123 00:19:40.303009 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twgjs" event={"ID":"8d010bb3-db87-45a5-92bf-2e2bc375534e","Type":"ContainerStarted","Data":"c0880847d0fa2260bb0b3e7f0973b86e9cd6f4ed6d471973594cd7a4bd703997"} Nov 23 00:19:40 crc kubenswrapper[4743]: I1123 00:19:40.305685 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" event={"ID":"a46c6df3-6857-4e57-bfe3-e6dd3c3a3484","Type":"ContainerDied","Data":"8ffdc5f98f1b13d43ae552c948434b6b1a2eed051a78d924b29210b2793b104a"} Nov 23 00:19:40 crc kubenswrapper[4743]: I1123 00:19:40.305860 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ffdc5f98f1b13d43ae552c948434b6b1a2eed051a78d924b29210b2793b104a" Nov 23 00:19:40 crc kubenswrapper[4743]: I1123 00:19:40.305738 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f" Nov 23 00:19:40 crc kubenswrapper[4743]: I1123 00:19:40.330218 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-twgjs" podStartSLOduration=2.806898375 podStartE2EDuration="5.330187495s" podCreationTimestamp="2025-11-23 00:19:35 +0000 UTC" firstStartedPulling="2025-11-23 00:19:37.253854417 +0000 UTC m=+769.331952584" lastFinishedPulling="2025-11-23 00:19:39.777143537 +0000 UTC m=+771.855241704" observedRunningTime="2025-11-23 00:19:40.327598132 +0000 UTC m=+772.405696279" watchObservedRunningTime="2025-11-23 00:19:40.330187495 +0000 UTC m=+772.408285632" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.416166 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr"] Nov 23 00:19:43 crc kubenswrapper[4743]: E1123 00:19:43.416373 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" containerName="util" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.416384 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" containerName="util" Nov 23 00:19:43 crc kubenswrapper[4743]: E1123 00:19:43.416399 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" containerName="pull" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.416404 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" containerName="pull" Nov 23 00:19:43 crc kubenswrapper[4743]: E1123 00:19:43.416420 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" containerName="extract" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.416426 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" containerName="extract" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.416537 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46c6df3-6857-4e57-bfe3-e6dd3c3a3484" containerName="extract" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.417229 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.421255 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.433221 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr"] Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.535468 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.535719 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjcr8\" (UniqueName: \"kubernetes.io/projected/2f527e47-438e-4dbe-81e3-2528e5a46677-kube-api-access-cjcr8\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.535855 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.636665 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.636760 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.636815 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjcr8\" (UniqueName: \"kubernetes.io/projected/2f527e47-438e-4dbe-81e3-2528e5a46677-kube-api-access-cjcr8\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.637232 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.637450 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.660303 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjcr8\" (UniqueName: \"kubernetes.io/projected/2f527e47-438e-4dbe-81e3-2528e5a46677-kube-api-access-cjcr8\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:43 crc kubenswrapper[4743]: I1123 00:19:43.734593 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.188674 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr"] Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.331688 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" event={"ID":"2f527e47-438e-4dbe-81e3-2528e5a46677","Type":"ContainerStarted","Data":"710ef9a6f8902889a4a407463ae3dcb90cde6e0431ce37dc8e3db9a7f07ffcde"} Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.407888 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns"] Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.408821 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.430191 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns"] Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.445465 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcds8\" (UniqueName: \"kubernetes.io/projected/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-kube-api-access-qcds8\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.445549 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.445681 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.546658 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcds8\" (UniqueName: \"kubernetes.io/projected/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-kube-api-access-qcds8\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.546746 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.546795 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.547308 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.547324 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.568235 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcds8\" (UniqueName: \"kubernetes.io/projected/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-kube-api-access-qcds8\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:44 crc kubenswrapper[4743]: I1123 00:19:44.723932 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:45 crc kubenswrapper[4743]: I1123 00:19:45.247810 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns"] Nov 23 00:19:45 crc kubenswrapper[4743]: I1123 00:19:45.337599 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" event={"ID":"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d","Type":"ContainerStarted","Data":"45d986ac4ef4127e943d51f622bd881956f4f7a0ca3aaa76f3bdd887940624f5"} Nov 23 00:19:45 crc kubenswrapper[4743]: I1123 00:19:45.339572 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" event={"ID":"2f527e47-438e-4dbe-81e3-2528e5a46677","Type":"ContainerStarted","Data":"1f3301fcac4e9419859946b5b271029c859327254a3d74316364f96d75acb396"} Nov 23 00:19:46 crc kubenswrapper[4743]: I1123 00:19:46.107579 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:46 crc kubenswrapper[4743]: I1123 00:19:46.107654 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:47 crc kubenswrapper[4743]: I1123 00:19:47.173430 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-twgjs" podUID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerName="registry-server" probeResult="failure" output=< Nov 23 00:19:47 crc kubenswrapper[4743]: timeout: failed to connect service ":50051" within 1s Nov 23 00:19:47 crc kubenswrapper[4743]: > Nov 23 00:19:47 crc kubenswrapper[4743]: I1123 00:19:47.359974 4743 generic.go:334] "Generic (PLEG): container finished" podID="2f527e47-438e-4dbe-81e3-2528e5a46677" containerID="1f3301fcac4e9419859946b5b271029c859327254a3d74316364f96d75acb396" exitCode=0 Nov 23 00:19:47 crc kubenswrapper[4743]: I1123 00:19:47.360078 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" 
event={"ID":"2f527e47-438e-4dbe-81e3-2528e5a46677","Type":"ContainerDied","Data":"1f3301fcac4e9419859946b5b271029c859327254a3d74316364f96d75acb396"} Nov 23 00:19:47 crc kubenswrapper[4743]: I1123 00:19:47.361778 4743 generic.go:334] "Generic (PLEG): container finished" podID="cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" containerID="922619e5fd5cd7a486424906dd8eec32b414cb5f1900c95d3038479cf32a4501" exitCode=0 Nov 23 00:19:47 crc kubenswrapper[4743]: I1123 00:19:47.361844 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" event={"ID":"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d","Type":"ContainerDied","Data":"922619e5fd5cd7a486424906dd8eec32b414cb5f1900c95d3038479cf32a4501"} Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.368379 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kxrz8"] Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.369641 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.422049 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxrz8"] Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.509061 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjfc8\" (UniqueName: \"kubernetes.io/projected/7e51d2b8-42ec-4508-8092-02949efcc06d-kube-api-access-wjfc8\") pod \"certified-operators-kxrz8\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.509392 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-utilities\") pod \"certified-operators-kxrz8\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.509564 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-catalog-content\") pod \"certified-operators-kxrz8\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.611112 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjfc8\" (UniqueName: \"kubernetes.io/projected/7e51d2b8-42ec-4508-8092-02949efcc06d-kube-api-access-wjfc8\") pod \"certified-operators-kxrz8\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.611633 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-utilities\") pod \"certified-operators-kxrz8\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.611681 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-catalog-content\") pod \"certified-operators-kxrz8\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.612136 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-utilities\") pod \"certified-operators-kxrz8\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.612165 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-catalog-content\") pod \"certified-operators-kxrz8\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.633356 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjfc8\" (UniqueName: \"kubernetes.io/projected/7e51d2b8-42ec-4508-8092-02949efcc06d-kube-api-access-wjfc8\") pod \"certified-operators-kxrz8\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:48 crc kubenswrapper[4743]: I1123 00:19:48.685030 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:49 crc kubenswrapper[4743]: I1123 00:19:49.293942 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxrz8"] Nov 23 00:19:49 crc kubenswrapper[4743]: W1123 00:19:49.302303 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e51d2b8_42ec_4508_8092_02949efcc06d.slice/crio-56c061bbf0396a04c617559675355761cfb30145e45f7a62c5ec708e7054675d WatchSource:0}: Error finding container 56c061bbf0396a04c617559675355761cfb30145e45f7a62c5ec708e7054675d: Status 404 returned error can't find the container with id 56c061bbf0396a04c617559675355761cfb30145e45f7a62c5ec708e7054675d Nov 23 00:19:49 crc kubenswrapper[4743]: I1123 00:19:49.373734 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxrz8" event={"ID":"7e51d2b8-42ec-4508-8092-02949efcc06d","Type":"ContainerStarted","Data":"56c061bbf0396a04c617559675355761cfb30145e45f7a62c5ec708e7054675d"} Nov 23 00:19:49 crc kubenswrapper[4743]: I1123 00:19:49.376853 4743 generic.go:334] "Generic (PLEG): container finished" podID="cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" containerID="21d80d1ed84608ac2990a48c7fa9197a20401071786155e215b532956fedeae2" exitCode=0 Nov 23 00:19:49 crc kubenswrapper[4743]: I1123 00:19:49.376912 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" event={"ID":"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d","Type":"ContainerDied","Data":"21d80d1ed84608ac2990a48c7fa9197a20401071786155e215b532956fedeae2"} Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.214002 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh"] Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.215381 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.228551 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh"] Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.345457 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.345840 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk8vx\" (UniqueName: \"kubernetes.io/projected/7df6008c-cc2b-4422-a7d4-c02b91c052a6-kube-api-access-kk8vx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.345930 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.394885 4743 generic.go:334] "Generic (PLEG): container finished" podID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerID="ab6a6c33a4f965e42ee15d931ca53daf208f7a6024dba445bde7e2489cf9df74" exitCode=0 Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.394966 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxrz8" event={"ID":"7e51d2b8-42ec-4508-8092-02949efcc06d","Type":"ContainerDied","Data":"ab6a6c33a4f965e42ee15d931ca53daf208f7a6024dba445bde7e2489cf9df74"} Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.400871 4743 generic.go:334] "Generic (PLEG): container finished" podID="cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" containerID="a01c963f6fd68cf0ab2e0c1970e00cae2668ea806b087acc85012485ac40cda3" exitCode=0 Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.400924 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" event={"ID":"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d","Type":"ContainerDied","Data":"a01c963f6fd68cf0ab2e0c1970e00cae2668ea806b087acc85012485ac40cda3"} Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.402619 4743 generic.go:334] "Generic (PLEG): container finished" podID="2f527e47-438e-4dbe-81e3-2528e5a46677" containerID="a2970d1a2c31df557da128039eb3b9bdb03f5efd7ba515818f2ab43245c39ce9" exitCode=0 Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.402640 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" 
event={"ID":"2f527e47-438e-4dbe-81e3-2528e5a46677","Type":"ContainerDied","Data":"a2970d1a2c31df557da128039eb3b9bdb03f5efd7ba515818f2ab43245c39ce9"} Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.448252 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.448317 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk8vx\" (UniqueName: \"kubernetes.io/projected/7df6008c-cc2b-4422-a7d4-c02b91c052a6-kube-api-access-kk8vx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.448347 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.448927 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.449018 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.478699 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk8vx\" (UniqueName: \"kubernetes.io/projected/7df6008c-cc2b-4422-a7d4-c02b91c052a6-kube-api-access-kk8vx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:19:50 crc kubenswrapper[4743]: I1123 00:19:50.531718 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.075516 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh"] Nov 23 00:19:51 crc kubenswrapper[4743]: W1123 00:19:51.090201 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7df6008c_cc2b_4422_a7d4_c02b91c052a6.slice/crio-9268fd0007f35927a9e11a4dac3e4344c17532be57fd5557a1388dc56f9f80e2 WatchSource:0}: Error finding container 9268fd0007f35927a9e11a4dac3e4344c17532be57fd5557a1388dc56f9f80e2: Status 404 returned error can't find the container with id 9268fd0007f35927a9e11a4dac3e4344c17532be57fd5557a1388dc56f9f80e2 Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.408873 4743 generic.go:334] "Generic (PLEG): container finished" podID="7df6008c-cc2b-4422-a7d4-c02b91c052a6" containerID="6e90df03857c3dac9ea12c3737a72cab19bbdbf6e7ca4055c5b9a8f46fe7b9df" exitCode=0 Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.408933 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" event={"ID":"7df6008c-cc2b-4422-a7d4-c02b91c052a6","Type":"ContainerDied","Data":"6e90df03857c3dac9ea12c3737a72cab19bbdbf6e7ca4055c5b9a8f46fe7b9df"} Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.408961 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" event={"ID":"7df6008c-cc2b-4422-a7d4-c02b91c052a6","Type":"ContainerStarted","Data":"9268fd0007f35927a9e11a4dac3e4344c17532be57fd5557a1388dc56f9f80e2"} Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.411842 4743 generic.go:334] "Generic (PLEG): container finished" podID="2f527e47-438e-4dbe-81e3-2528e5a46677" containerID="6a9d05b33b83eb86e91e9957d081c1b57b5a06b0866a08bc45bddf97b3f44255" exitCode=0 Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.411877 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" event={"ID":"2f527e47-438e-4dbe-81e3-2528e5a46677","Type":"ContainerDied","Data":"6a9d05b33b83eb86e91e9957d081c1b57b5a06b0866a08bc45bddf97b3f44255"} Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.414637 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxrz8" event={"ID":"7e51d2b8-42ec-4508-8092-02949efcc06d","Type":"ContainerStarted","Data":"26201969135e6d76b00bea74816075b7e974c7a5d4a4a51dd1048f9e9845afed"} Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.764741 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.873286 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-util\") pod \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.873355 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcds8\" (UniqueName: \"kubernetes.io/projected/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-kube-api-access-qcds8\") pod \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.873375 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-bundle\") pod \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\" (UID: \"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d\") " Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.874169 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-bundle" (OuterVolumeSpecName: "bundle") pod "cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" (UID: "cfd3bf60-7bfe-4a47-940d-a1e8b864d77d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.900111 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-kube-api-access-qcds8" (OuterVolumeSpecName: "kube-api-access-qcds8") pod "cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" (UID: "cfd3bf60-7bfe-4a47-940d-a1e8b864d77d"). InnerVolumeSpecName "kube-api-access-qcds8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.918818 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-util" (OuterVolumeSpecName: "util") pod "cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" (UID: "cfd3bf60-7bfe-4a47-940d-a1e8b864d77d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.975119 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcds8\" (UniqueName: \"kubernetes.io/projected/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-kube-api-access-qcds8\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.975161 4743 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:51 crc kubenswrapper[4743]: I1123 00:19:51.975171 4743 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfd3bf60-7bfe-4a47-940d-a1e8b864d77d-util\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.421234 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" event={"ID":"cfd3bf60-7bfe-4a47-940d-a1e8b864d77d","Type":"ContainerDied","Data":"45d986ac4ef4127e943d51f622bd881956f4f7a0ca3aaa76f3bdd887940624f5"} Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.421275 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45d986ac4ef4127e943d51f622bd881956f4f7a0ca3aaa76f3bdd887940624f5" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.421278 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.427639 4743 generic.go:334] "Generic (PLEG): container finished" podID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerID="26201969135e6d76b00bea74816075b7e974c7a5d4a4a51dd1048f9e9845afed" exitCode=0 Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.427760 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxrz8" event={"ID":"7e51d2b8-42ec-4508-8092-02949efcc06d","Type":"ContainerDied","Data":"26201969135e6d76b00bea74816075b7e974c7a5d4a4a51dd1048f9e9845afed"} Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.796056 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.888409 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjcr8\" (UniqueName: \"kubernetes.io/projected/2f527e47-438e-4dbe-81e3-2528e5a46677-kube-api-access-cjcr8\") pod \"2f527e47-438e-4dbe-81e3-2528e5a46677\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.888516 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-bundle\") pod \"2f527e47-438e-4dbe-81e3-2528e5a46677\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.888607 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-util\") pod \"2f527e47-438e-4dbe-81e3-2528e5a46677\" (UID: \"2f527e47-438e-4dbe-81e3-2528e5a46677\") " Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.889468 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-bundle" (OuterVolumeSpecName: "bundle") pod "2f527e47-438e-4dbe-81e3-2528e5a46677" (UID: "2f527e47-438e-4dbe-81e3-2528e5a46677"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.892013 4743 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.901006 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f527e47-438e-4dbe-81e3-2528e5a46677-kube-api-access-cjcr8" (OuterVolumeSpecName: "kube-api-access-cjcr8") pod "2f527e47-438e-4dbe-81e3-2528e5a46677" (UID: "2f527e47-438e-4dbe-81e3-2528e5a46677"). InnerVolumeSpecName "kube-api-access-cjcr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.901633 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-util" (OuterVolumeSpecName: "util") pod "2f527e47-438e-4dbe-81e3-2528e5a46677" (UID: "2f527e47-438e-4dbe-81e3-2528e5a46677"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.958107 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f"] Nov 23 00:19:52 crc kubenswrapper[4743]: E1123 00:19:52.958547 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f527e47-438e-4dbe-81e3-2528e5a46677" containerName="pull" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.958622 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f527e47-438e-4dbe-81e3-2528e5a46677" containerName="pull" Nov 23 00:19:52 crc kubenswrapper[4743]: E1123 00:19:52.958681 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" containerName="pull" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.958729 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" containerName="pull" Nov 23 00:19:52 crc kubenswrapper[4743]: E1123 00:19:52.958785 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" containerName="extract" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.958834 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" containerName="extract" Nov 23 00:19:52 crc kubenswrapper[4743]: E1123 00:19:52.958897 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f527e47-438e-4dbe-81e3-2528e5a46677" containerName="extract" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.958945 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f527e47-438e-4dbe-81e3-2528e5a46677" containerName="extract" Nov 23 00:19:52 crc kubenswrapper[4743]: E1123 00:19:52.958994 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f527e47-438e-4dbe-81e3-2528e5a46677" containerName="util" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.959040 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f527e47-438e-4dbe-81e3-2528e5a46677" containerName="util" Nov 23 00:19:52 crc kubenswrapper[4743]: E1123 00:19:52.959087 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" containerName="util" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.959132 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" containerName="util" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.959264 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f527e47-438e-4dbe-81e3-2528e5a46677" containerName="extract" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.959334 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd3bf60-7bfe-4a47-940d-a1e8b864d77d" containerName="extract" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.959797 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.961392 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-5cp4p" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.961978 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.964276 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.973841 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f"] Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.997176 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjcr8\" (UniqueName: \"kubernetes.io/projected/2f527e47-438e-4dbe-81e3-2528e5a46677-kube-api-access-cjcr8\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:52 crc kubenswrapper[4743]: I1123 00:19:52.997207 4743 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f527e47-438e-4dbe-81e3-2528e5a46677-util\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.092787 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq"] Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.093576 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.097058 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.098106 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsqsq\" (UniqueName: \"kubernetes.io/projected/ba332c3a-1550-4dba-856c-13ec50c7f04a-kube-api-access-jsqsq\") pod \"obo-prometheus-operator-668cf9dfbb-dkd8f\" (UID: \"ba332c3a-1550-4dba-856c-13ec50c7f04a\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.099766 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-vhpvk" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.103005 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq"] Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.109586 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb"] Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.110289 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.153463 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb"] Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.203149 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df083e33-3f8a-4094-8e61-3ce2fd8cea48-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq\" (UID: \"df083e33-3f8a-4094-8e61-3ce2fd8cea48\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.203223 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsqsq\" (UniqueName: \"kubernetes.io/projected/ba332c3a-1550-4dba-856c-13ec50c7f04a-kube-api-access-jsqsq\") pod \"obo-prometheus-operator-668cf9dfbb-dkd8f\" (UID: \"ba332c3a-1550-4dba-856c-13ec50c7f04a\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.203281 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df083e33-3f8a-4094-8e61-3ce2fd8cea48-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq\" (UID: \"df083e33-3f8a-4094-8e61-3ce2fd8cea48\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.226317 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsqsq\" (UniqueName: \"kubernetes.io/projected/ba332c3a-1550-4dba-856c-13ec50c7f04a-kube-api-access-jsqsq\") pod \"obo-prometheus-operator-668cf9dfbb-dkd8f\" (UID: \"ba332c3a-1550-4dba-856c-13ec50c7f04a\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.279573 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.304625 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df083e33-3f8a-4094-8e61-3ce2fd8cea48-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq\" (UID: \"df083e33-3f8a-4094-8e61-3ce2fd8cea48\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.304733 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8b59a3a-47f6-4efb-8851-d64094821b88-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb\" (UID: \"a8b59a3a-47f6-4efb-8851-d64094821b88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.304767 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df083e33-3f8a-4094-8e61-3ce2fd8cea48-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq\" (UID: \"df083e33-3f8a-4094-8e61-3ce2fd8cea48\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.304859 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8b59a3a-47f6-4efb-8851-d64094821b88-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb\" (UID: \"a8b59a3a-47f6-4efb-8851-d64094821b88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.308014 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df083e33-3f8a-4094-8e61-3ce2fd8cea48-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq\" (UID: \"df083e33-3f8a-4094-8e61-3ce2fd8cea48\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.311365 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-x8gdr"] Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.312277 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.315019 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.315221 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df083e33-3f8a-4094-8e61-3ce2fd8cea48-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq\" (UID: \"df083e33-3f8a-4094-8e61-3ce2fd8cea48\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.315257 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-bqnwl" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.344309 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-x8gdr"] Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.406559 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8b59a3a-47f6-4efb-8851-d64094821b88-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb\" (UID: \"a8b59a3a-47f6-4efb-8851-d64094821b88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.406620 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8b59a3a-47f6-4efb-8851-d64094821b88-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb\" (UID: \"a8b59a3a-47f6-4efb-8851-d64094821b88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.409859 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8b59a3a-47f6-4efb-8851-d64094821b88-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb\" (UID: \"a8b59a3a-47f6-4efb-8851-d64094821b88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.410930 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8b59a3a-47f6-4efb-8851-d64094821b88-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb\" (UID: \"a8b59a3a-47f6-4efb-8851-d64094821b88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.411743 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.434454 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxrz8" event={"ID":"7e51d2b8-42ec-4508-8092-02949efcc06d","Type":"ContainerStarted","Data":"3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5"} Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.437119 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" event={"ID":"2f527e47-438e-4dbe-81e3-2528e5a46677","Type":"ContainerDied","Data":"710ef9a6f8902889a4a407463ae3dcb90cde6e0431ce37dc8e3db9a7f07ffcde"} Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.437141 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="710ef9a6f8902889a4a407463ae3dcb90cde6e0431ce37dc8e3db9a7f07ffcde" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.437202 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.447578 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.462630 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kxrz8" podStartSLOduration=3.005153843 podStartE2EDuration="5.462607127s" podCreationTimestamp="2025-11-23 00:19:48 +0000 UTC" firstStartedPulling="2025-11-23 00:19:50.39943346 +0000 UTC m=+782.477531587" lastFinishedPulling="2025-11-23 00:19:52.856886744 +0000 UTC m=+784.934984871" observedRunningTime="2025-11-23 00:19:53.458807474 +0000 UTC m=+785.536905601" watchObservedRunningTime="2025-11-23 00:19:53.462607127 +0000 UTC m=+785.540705254" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.506626 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-28x4p"] Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.508163 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/69e0aa0b-787b-4283-9e2c-3bdad984d8c0-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-x8gdr\" (UID: \"69e0aa0b-787b-4283-9e2c-3bdad984d8c0\") " pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.514903 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-28x4p" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.515694 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdnz\" (UniqueName: \"kubernetes.io/projected/69e0aa0b-787b-4283-9e2c-3bdad984d8c0-kube-api-access-5fdnz\") pod \"observability-operator-d8bb48f5d-x8gdr\" (UID: \"69e0aa0b-787b-4283-9e2c-3bdad984d8c0\") " pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.519892 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-cgmm6" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.531894 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-28x4p"] Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.616893 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/69e0aa0b-787b-4283-9e2c-3bdad984d8c0-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-x8gdr\" (UID: \"69e0aa0b-787b-4283-9e2c-3bdad984d8c0\") " pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.616989 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6b2s\" (UniqueName: \"kubernetes.io/projected/b01c1d4f-e027-408d-9d5f-c7c7006aa50f-kube-api-access-x6b2s\") pod \"perses-operator-5446b9c989-28x4p\" (UID: \"b01c1d4f-e027-408d-9d5f-c7c7006aa50f\") " pod="openshift-operators/perses-operator-5446b9c989-28x4p" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.617041 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b01c1d4f-e027-408d-9d5f-c7c7006aa50f-openshift-service-ca\") pod \"perses-operator-5446b9c989-28x4p\" (UID: \"b01c1d4f-e027-408d-9d5f-c7c7006aa50f\") " pod="openshift-operators/perses-operator-5446b9c989-28x4p" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.617078 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdnz\" (UniqueName: \"kubernetes.io/projected/69e0aa0b-787b-4283-9e2c-3bdad984d8c0-kube-api-access-5fdnz\") pod \"observability-operator-d8bb48f5d-x8gdr\" (UID: \"69e0aa0b-787b-4283-9e2c-3bdad984d8c0\") " pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.620916 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/69e0aa0b-787b-4283-9e2c-3bdad984d8c0-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-x8gdr\" (UID: \"69e0aa0b-787b-4283-9e2c-3bdad984d8c0\") " pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.642998 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdnz\" (UniqueName: \"kubernetes.io/projected/69e0aa0b-787b-4283-9e2c-3bdad984d8c0-kube-api-access-5fdnz\") pod \"observability-operator-d8bb48f5d-x8gdr\" (UID: \"69e0aa0b-787b-4283-9e2c-3bdad984d8c0\") " pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 
00:19:53.691015 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.691067 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.718350 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6b2s\" (UniqueName: \"kubernetes.io/projected/b01c1d4f-e027-408d-9d5f-c7c7006aa50f-kube-api-access-x6b2s\") pod \"perses-operator-5446b9c989-28x4p\" (UID: \"b01c1d4f-e027-408d-9d5f-c7c7006aa50f\") " pod="openshift-operators/perses-operator-5446b9c989-28x4p" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.718421 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b01c1d4f-e027-408d-9d5f-c7c7006aa50f-openshift-service-ca\") pod \"perses-operator-5446b9c989-28x4p\" (UID: \"b01c1d4f-e027-408d-9d5f-c7c7006aa50f\") " pod="openshift-operators/perses-operator-5446b9c989-28x4p" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.719440 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b01c1d4f-e027-408d-9d5f-c7c7006aa50f-openshift-service-ca\") pod \"perses-operator-5446b9c989-28x4p\" (UID: \"b01c1d4f-e027-408d-9d5f-c7c7006aa50f\") " pod="openshift-operators/perses-operator-5446b9c989-28x4p" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.734070 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6b2s\" (UniqueName: \"kubernetes.io/projected/b01c1d4f-e027-408d-9d5f-c7c7006aa50f-kube-api-access-x6b2s\") pod \"perses-operator-5446b9c989-28x4p\" (UID: \"b01c1d4f-e027-408d-9d5f-c7c7006aa50f\") " pod="openshift-operators/perses-operator-5446b9c989-28x4p" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.778772 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq"] Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.845776 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-28x4p" Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.870754 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f"] Nov 23 00:19:53 crc kubenswrapper[4743]: I1123 00:19:53.933288 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" Nov 23 00:19:54 crc kubenswrapper[4743]: I1123 00:19:54.042334 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb"] Nov 23 00:19:54 crc kubenswrapper[4743]: W1123 00:19:54.077075 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8b59a3a_47f6_4efb_8851_d64094821b88.slice/crio-07e1a8f721e40bb103d72e51835167be8264a0a29735e146286eed3f31b41138 WatchSource:0}: Error finding container 07e1a8f721e40bb103d72e51835167be8264a0a29735e146286eed3f31b41138: Status 404 returned error can't find the container with id 07e1a8f721e40bb103d72e51835167be8264a0a29735e146286eed3f31b41138 Nov 23 00:19:54 crc kubenswrapper[4743]: I1123 00:19:54.350184 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-28x4p"] Nov 23 00:19:54 crc kubenswrapper[4743]: I1123 00:19:54.447581 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq" event={"ID":"df083e33-3f8a-4094-8e61-3ce2fd8cea48","Type":"ContainerStarted","Data":"1d1f2b3884a5a557664f65ec157e62b1507f0d623e3d03cd2322c426f3e08e60"} Nov 23 00:19:54 crc kubenswrapper[4743]: I1123 00:19:54.449069 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb" event={"ID":"a8b59a3a-47f6-4efb-8851-d64094821b88","Type":"ContainerStarted","Data":"07e1a8f721e40bb103d72e51835167be8264a0a29735e146286eed3f31b41138"} Nov 23 00:19:54 crc kubenswrapper[4743]: I1123 00:19:54.450182 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f" event={"ID":"ba332c3a-1550-4dba-856c-13ec50c7f04a","Type":"ContainerStarted","Data":"77b77c66cf141b1078cdfe9b4d20d189928041c79d03e364e32f9c9b9247517f"} Nov 23 00:19:54 crc kubenswrapper[4743]: I1123 00:19:54.484340 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-x8gdr"] Nov 23 00:19:56 crc kubenswrapper[4743]: I1123 00:19:56.176650 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:56 crc kubenswrapper[4743]: I1123 00:19:56.247244 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:56 crc kubenswrapper[4743]: W1123 00:19:56.351625 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb01c1d4f_e027_408d_9d5f_c7c7006aa50f.slice/crio-be1b5f3e1b3323c40f6019348ee48501c184f0a50e3d3fbd219cd548ee65c62c WatchSource:0}: Error finding container be1b5f3e1b3323c40f6019348ee48501c184f0a50e3d3fbd219cd548ee65c62c: Status 404 returned error can't find the container with id be1b5f3e1b3323c40f6019348ee48501c184f0a50e3d3fbd219cd548ee65c62c Nov 23 00:19:56 crc kubenswrapper[4743]: I1123 00:19:56.464430 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" event={"ID":"69e0aa0b-787b-4283-9e2c-3bdad984d8c0","Type":"ContainerStarted","Data":"6a148719b158908cd58bd2a7fa3299bc240b0716a53e9fc44fdfd1c909434f50"} Nov 23 00:19:56 crc kubenswrapper[4743]: I1123 00:19:56.465468 
4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-28x4p" event={"ID":"b01c1d4f-e027-408d-9d5f-c7c7006aa50f","Type":"ContainerStarted","Data":"be1b5f3e1b3323c40f6019348ee48501c184f0a50e3d3fbd219cd548ee65c62c"} Nov 23 00:19:57 crc kubenswrapper[4743]: I1123 00:19:57.474632 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" event={"ID":"7df6008c-cc2b-4422-a7d4-c02b91c052a6","Type":"ContainerStarted","Data":"0506706e68e88df8f00e50504152d6a05ab2e47da7e1922517108a2f13a1e108"} Nov 23 00:19:57 crc kubenswrapper[4743]: I1123 00:19:57.967648 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twgjs"] Nov 23 00:19:57 crc kubenswrapper[4743]: I1123 00:19:57.967872 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-twgjs" podUID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerName="registry-server" containerID="cri-o://c0880847d0fa2260bb0b3e7f0973b86e9cd6f4ed6d471973594cd7a4bd703997" gracePeriod=2 Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.492996 4743 generic.go:334] "Generic (PLEG): container finished" podID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerID="c0880847d0fa2260bb0b3e7f0973b86e9cd6f4ed6d471973594cd7a4bd703997" exitCode=0 Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.493541 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twgjs" event={"ID":"8d010bb3-db87-45a5-92bf-2e2bc375534e","Type":"ContainerDied","Data":"c0880847d0fa2260bb0b3e7f0973b86e9cd6f4ed6d471973594cd7a4bd703997"} Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.495157 4743 generic.go:334] "Generic (PLEG): container finished" podID="7df6008c-cc2b-4422-a7d4-c02b91c052a6" containerID="0506706e68e88df8f00e50504152d6a05ab2e47da7e1922517108a2f13a1e108" exitCode=0 Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.495186 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" event={"ID":"7df6008c-cc2b-4422-a7d4-c02b91c052a6","Type":"ContainerDied","Data":"0506706e68e88df8f00e50504152d6a05ab2e47da7e1922517108a2f13a1e108"} Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.662880 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.685353 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.685893 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.726409 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-utilities\") pod \"8d010bb3-db87-45a5-92bf-2e2bc375534e\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.726532 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqnm7\" (UniqueName: \"kubernetes.io/projected/8d010bb3-db87-45a5-92bf-2e2bc375534e-kube-api-access-sqnm7\") pod \"8d010bb3-db87-45a5-92bf-2e2bc375534e\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.726611 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-catalog-content\") pod \"8d010bb3-db87-45a5-92bf-2e2bc375534e\" (UID: \"8d010bb3-db87-45a5-92bf-2e2bc375534e\") " Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.728934 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-utilities" (OuterVolumeSpecName: "utilities") pod "8d010bb3-db87-45a5-92bf-2e2bc375534e" (UID: "8d010bb3-db87-45a5-92bf-2e2bc375534e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.765175 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d010bb3-db87-45a5-92bf-2e2bc375534e-kube-api-access-sqnm7" (OuterVolumeSpecName: "kube-api-access-sqnm7") pod "8d010bb3-db87-45a5-92bf-2e2bc375534e" (UID: "8d010bb3-db87-45a5-92bf-2e2bc375534e"). InnerVolumeSpecName "kube-api-access-sqnm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.781164 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.829446 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.829497 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqnm7\" (UniqueName: \"kubernetes.io/projected/8d010bb3-db87-45a5-92bf-2e2bc375534e-kube-api-access-sqnm7\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.834684 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-5c96b8969b-j99fz"] Nov 23 00:19:58 crc kubenswrapper[4743]: E1123 00:19:58.835118 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerName="extract-utilities" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.835145 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerName="extract-utilities" Nov 23 00:19:58 crc kubenswrapper[4743]: E1123 00:19:58.835162 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerName="registry-server" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.835194 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerName="registry-server" Nov 23 00:19:58 crc kubenswrapper[4743]: E1123 00:19:58.835209 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerName="extract-content" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.835216 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerName="extract-content" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.835334 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d010bb3-db87-45a5-92bf-2e2bc375534e" containerName="registry-server" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.835926 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.838974 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.839075 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.839417 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.839577 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-lbfr8" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.856025 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-5c96b8969b-j99fz"] Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.928285 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d010bb3-db87-45a5-92bf-2e2bc375534e" (UID: "8d010bb3-db87-45a5-92bf-2e2bc375534e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.930283 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzqs\" (UniqueName: \"kubernetes.io/projected/5e4c2e9b-6668-483f-8f27-6271ee8c3250-kube-api-access-wjzqs\") pod \"elastic-operator-5c96b8969b-j99fz\" (UID: \"5e4c2e9b-6668-483f-8f27-6271ee8c3250\") " pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.930363 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e4c2e9b-6668-483f-8f27-6271ee8c3250-apiservice-cert\") pod \"elastic-operator-5c96b8969b-j99fz\" (UID: \"5e4c2e9b-6668-483f-8f27-6271ee8c3250\") " pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.930403 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e4c2e9b-6668-483f-8f27-6271ee8c3250-webhook-cert\") pod \"elastic-operator-5c96b8969b-j99fz\" (UID: \"5e4c2e9b-6668-483f-8f27-6271ee8c3250\") " pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" Nov 23 00:19:58 crc kubenswrapper[4743]: I1123 00:19:58.930444 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d010bb3-db87-45a5-92bf-2e2bc375534e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.031457 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzqs\" (UniqueName: \"kubernetes.io/projected/5e4c2e9b-6668-483f-8f27-6271ee8c3250-kube-api-access-wjzqs\") pod \"elastic-operator-5c96b8969b-j99fz\" (UID: \"5e4c2e9b-6668-483f-8f27-6271ee8c3250\") " pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.031625 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e4c2e9b-6668-483f-8f27-6271ee8c3250-apiservice-cert\") pod \"elastic-operator-5c96b8969b-j99fz\" (UID: \"5e4c2e9b-6668-483f-8f27-6271ee8c3250\") " pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.031734 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e4c2e9b-6668-483f-8f27-6271ee8c3250-webhook-cert\") pod \"elastic-operator-5c96b8969b-j99fz\" (UID: \"5e4c2e9b-6668-483f-8f27-6271ee8c3250\") " pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.039913 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e4c2e9b-6668-483f-8f27-6271ee8c3250-apiservice-cert\") pod \"elastic-operator-5c96b8969b-j99fz\" (UID: \"5e4c2e9b-6668-483f-8f27-6271ee8c3250\") " pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.043678 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e4c2e9b-6668-483f-8f27-6271ee8c3250-webhook-cert\") pod \"elastic-operator-5c96b8969b-j99fz\" (UID: \"5e4c2e9b-6668-483f-8f27-6271ee8c3250\") " pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.049110 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjzqs\" (UniqueName: \"kubernetes.io/projected/5e4c2e9b-6668-483f-8f27-6271ee8c3250-kube-api-access-wjzqs\") pod \"elastic-operator-5c96b8969b-j99fz\" (UID: \"5e4c2e9b-6668-483f-8f27-6271ee8c3250\") " pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" Nov 23 00:19:59 crc kubenswrapper[4743]: E1123 00:19:59.127568 4743 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7df6008c_cc2b_4422_a7d4_c02b91c052a6.slice/crio-conmon-f3bd98428f1292f4a1404915abb37167ca336b3c4b98865b7df98f9213dcb7eb.scope\": RecentStats: unable to find data in memory cache]" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.173684 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.506069 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twgjs" event={"ID":"8d010bb3-db87-45a5-92bf-2e2bc375534e","Type":"ContainerDied","Data":"9506b6a069c0a030b3eb3d497c4e83bb249a020af57813ba84151546cf49cfb2"} Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.506422 4743 scope.go:117] "RemoveContainer" containerID="c0880847d0fa2260bb0b3e7f0973b86e9cd6f4ed6d471973594cd7a4bd703997" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.506148 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twgjs" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.511916 4743 generic.go:334] "Generic (PLEG): container finished" podID="7df6008c-cc2b-4422-a7d4-c02b91c052a6" containerID="f3bd98428f1292f4a1404915abb37167ca336b3c4b98865b7df98f9213dcb7eb" exitCode=0 Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.512004 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" event={"ID":"7df6008c-cc2b-4422-a7d4-c02b91c052a6","Type":"ContainerDied","Data":"f3bd98428f1292f4a1404915abb37167ca336b3c4b98865b7df98f9213dcb7eb"} Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.547654 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twgjs"] Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.552361 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-twgjs"] Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.568206 4743 scope.go:117] "RemoveContainer" containerID="3ed5d8902735fcda1ab5fa8dab8d9c0e0f21c59620ce71cd3e9a84980e0e3c66" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.599878 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.665623 4743 scope.go:117] "RemoveContainer" containerID="b1cf5547ddb2843a61eb94b8fea42e0f36d69b8bf962790e16f6e139954ff694" Nov 23 00:19:59 crc kubenswrapper[4743]: I1123 00:19:59.738362 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-5c96b8969b-j99fz"] Nov 23 00:20:00 crc kubenswrapper[4743]: I1123 00:20:00.523056 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" event={"ID":"5e4c2e9b-6668-483f-8f27-6271ee8c3250","Type":"ContainerStarted","Data":"469a4d8d578f440fbb79d1cbd667877022b2ca63d57bc75014a0488ffb84ae37"} Nov 23 00:20:00 crc kubenswrapper[4743]: I1123 00:20:00.746873 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d010bb3-db87-45a5-92bf-2e2bc375534e" path="/var/lib/kubelet/pods/8d010bb3-db87-45a5-92bf-2e2bc375534e/volumes" Nov 23 00:20:00 crc kubenswrapper[4743]: I1123 00:20:00.980329 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.080877 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-util\") pod \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.080957 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-bundle\") pod \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.081059 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk8vx\" (UniqueName: \"kubernetes.io/projected/7df6008c-cc2b-4422-a7d4-c02b91c052a6-kube-api-access-kk8vx\") pod \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\" (UID: \"7df6008c-cc2b-4422-a7d4-c02b91c052a6\") " Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.085063 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-bundle" (OuterVolumeSpecName: "bundle") pod "7df6008c-cc2b-4422-a7d4-c02b91c052a6" (UID: "7df6008c-cc2b-4422-a7d4-c02b91c052a6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.088333 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df6008c-cc2b-4422-a7d4-c02b91c052a6-kube-api-access-kk8vx" (OuterVolumeSpecName: "kube-api-access-kk8vx") pod "7df6008c-cc2b-4422-a7d4-c02b91c052a6" (UID: "7df6008c-cc2b-4422-a7d4-c02b91c052a6"). InnerVolumeSpecName "kube-api-access-kk8vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.096846 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-util" (OuterVolumeSpecName: "util") pod "7df6008c-cc2b-4422-a7d4-c02b91c052a6" (UID: "7df6008c-cc2b-4422-a7d4-c02b91c052a6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.182693 4743 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.182736 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk8vx\" (UniqueName: \"kubernetes.io/projected/7df6008c-cc2b-4422-a7d4-c02b91c052a6-kube-api-access-kk8vx\") on node \"crc\" DevicePath \"\"" Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.182749 4743 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7df6008c-cc2b-4422-a7d4-c02b91c052a6-util\") on node \"crc\" DevicePath \"\"" Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.543459 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" event={"ID":"7df6008c-cc2b-4422-a7d4-c02b91c052a6","Type":"ContainerDied","Data":"9268fd0007f35927a9e11a4dac3e4344c17532be57fd5557a1388dc56f9f80e2"} Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.543771 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9268fd0007f35927a9e11a4dac3e4344c17532be57fd5557a1388dc56f9f80e2" Nov 23 00:20:01 crc kubenswrapper[4743]: I1123 00:20:01.543623 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh" Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.492844 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-xhnrj"] Nov 23 00:20:02 crc kubenswrapper[4743]: E1123 00:20:02.493078 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df6008c-cc2b-4422-a7d4-c02b91c052a6" containerName="extract" Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.493095 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df6008c-cc2b-4422-a7d4-c02b91c052a6" containerName="extract" Nov 23 00:20:02 crc kubenswrapper[4743]: E1123 00:20:02.493117 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df6008c-cc2b-4422-a7d4-c02b91c052a6" containerName="util" Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.493126 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df6008c-cc2b-4422-a7d4-c02b91c052a6" containerName="util" Nov 23 00:20:02 crc kubenswrapper[4743]: E1123 00:20:02.493135 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df6008c-cc2b-4422-a7d4-c02b91c052a6" containerName="pull" Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.493143 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df6008c-cc2b-4422-a7d4-c02b91c052a6" containerName="pull" Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.493260 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df6008c-cc2b-4422-a7d4-c02b91c052a6" containerName="extract" Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.493739 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-xhnrj" Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.496402 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-txdkd" Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.501272 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-xhnrj"] Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.616641 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58fqs\" (UniqueName: \"kubernetes.io/projected/7e977f59-3c45-4fb5-afc3-4e0595735899-kube-api-access-58fqs\") pod \"interconnect-operator-5bb49f789d-xhnrj\" (UID: \"7e977f59-3c45-4fb5-afc3-4e0595735899\") " pod="service-telemetry/interconnect-operator-5bb49f789d-xhnrj" Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.721542 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58fqs\" (UniqueName: \"kubernetes.io/projected/7e977f59-3c45-4fb5-afc3-4e0595735899-kube-api-access-58fqs\") pod \"interconnect-operator-5bb49f789d-xhnrj\" (UID: \"7e977f59-3c45-4fb5-afc3-4e0595735899\") " pod="service-telemetry/interconnect-operator-5bb49f789d-xhnrj" Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.740017 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58fqs\" (UniqueName: \"kubernetes.io/projected/7e977f59-3c45-4fb5-afc3-4e0595735899-kube-api-access-58fqs\") pod \"interconnect-operator-5bb49f789d-xhnrj\" (UID: \"7e977f59-3c45-4fb5-afc3-4e0595735899\") " pod="service-telemetry/interconnect-operator-5bb49f789d-xhnrj" Nov 23 00:20:02 crc kubenswrapper[4743]: I1123 00:20:02.816737 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-xhnrj" Nov 23 00:20:03 crc kubenswrapper[4743]: I1123 00:20:03.753085 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kxrz8"] Nov 23 00:20:03 crc kubenswrapper[4743]: I1123 00:20:03.753540 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kxrz8" podUID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerName="registry-server" containerID="cri-o://3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5" gracePeriod=2 Nov 23 00:20:04 crc kubenswrapper[4743]: I1123 00:20:04.572241 4743 generic.go:334] "Generic (PLEG): container finished" podID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerID="3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5" exitCode=0 Nov 23 00:20:04 crc kubenswrapper[4743]: I1123 00:20:04.572419 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxrz8" event={"ID":"7e51d2b8-42ec-4508-8092-02949efcc06d","Type":"ContainerDied","Data":"3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5"} Nov 23 00:20:08 crc kubenswrapper[4743]: E1123 00:20:08.685676 4743 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5 is running failed: container process not found" containerID="3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 00:20:08 crc kubenswrapper[4743]: E1123 00:20:08.686429 4743 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5 is running failed: container process not found" containerID="3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 00:20:08 crc kubenswrapper[4743]: E1123 00:20:08.686971 4743 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5 is running failed: container process not found" containerID="3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 00:20:08 crc kubenswrapper[4743]: E1123 00:20:08.687011 4743 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-kxrz8" podUID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerName="registry-server" Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.320498 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.469385 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjfc8\" (UniqueName: \"kubernetes.io/projected/7e51d2b8-42ec-4508-8092-02949efcc06d-kube-api-access-wjfc8\") pod \"7e51d2b8-42ec-4508-8092-02949efcc06d\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.469538 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-catalog-content\") pod \"7e51d2b8-42ec-4508-8092-02949efcc06d\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.469637 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-utilities\") pod \"7e51d2b8-42ec-4508-8092-02949efcc06d\" (UID: \"7e51d2b8-42ec-4508-8092-02949efcc06d\") " Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.470876 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-utilities" (OuterVolumeSpecName: "utilities") pod "7e51d2b8-42ec-4508-8092-02949efcc06d" (UID: "7e51d2b8-42ec-4508-8092-02949efcc06d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.491924 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e51d2b8-42ec-4508-8092-02949efcc06d-kube-api-access-wjfc8" (OuterVolumeSpecName: "kube-api-access-wjfc8") pod "7e51d2b8-42ec-4508-8092-02949efcc06d" (UID: "7e51d2b8-42ec-4508-8092-02949efcc06d"). InnerVolumeSpecName "kube-api-access-wjfc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.547861 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e51d2b8-42ec-4508-8092-02949efcc06d" (UID: "7e51d2b8-42ec-4508-8092-02949efcc06d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.571579 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.571621 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e51d2b8-42ec-4508-8092-02949efcc06d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.571635 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjfc8\" (UniqueName: \"kubernetes.io/projected/7e51d2b8-42ec-4508-8092-02949efcc06d-kube-api-access-wjfc8\") on node \"crc\" DevicePath \"\"" Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.615388 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxrz8" event={"ID":"7e51d2b8-42ec-4508-8092-02949efcc06d","Type":"ContainerDied","Data":"56c061bbf0396a04c617559675355761cfb30145e45f7a62c5ec708e7054675d"} Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.615439 4743 scope.go:117] "RemoveContainer" containerID="3bdd3c4aa1ad62e746893cf178b29115b751bebca85e05a3d83737de610ee3e5" Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.615544 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxrz8" Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.646495 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kxrz8"] Nov 23 00:20:09 crc kubenswrapper[4743]: I1123 00:20:09.650375 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kxrz8"] Nov 23 00:20:10 crc kubenswrapper[4743]: I1123 00:20:10.731581 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e51d2b8-42ec-4508-8092-02949efcc06d" path="/var/lib/kubelet/pods/7e51d2b8-42ec-4508-8092-02949efcc06d/volumes" Nov 23 00:20:13 crc kubenswrapper[4743]: E1123 00:20:13.714038 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" Nov 23 00:20:13 crc kubenswrapper[4743]: E1123 00:20:13.714780 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator 
--thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jsqsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-668cf9dfbb-dkd8f_openshift-operators(ba332c3a-1550-4dba-856c-13ec50c7f04a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 00:20:13 crc kubenswrapper[4743]: E1123 00:20:13.716002 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f" podUID="ba332c3a-1550-4dba-856c-13ec50c7f04a" Nov 23 00:20:13 crc kubenswrapper[4743]: I1123 00:20:13.761311 4743 scope.go:117] "RemoveContainer" containerID="26201969135e6d76b00bea74816075b7e974c7a5d4a4a51dd1048f9e9845afed" Nov 23 00:20:13 crc kubenswrapper[4743]: I1123 00:20:13.791399 4743 scope.go:117] "RemoveContainer" containerID="ab6a6c33a4f965e42ee15d931ca53daf208f7a6024dba445bde7e2489cf9df74" Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.227353 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-xhnrj"] Nov 23 00:20:14 crc kubenswrapper[4743]: W1123 00:20:14.235425 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e977f59_3c45_4fb5_afc3_4e0595735899.slice/crio-768e5de6f55fc22a19d487ad5c746190e485b893f3bb7aa4b23d1d69ce889c8f WatchSource:0}: Error finding container 768e5de6f55fc22a19d487ad5c746190e485b893f3bb7aa4b23d1d69ce889c8f: Status 404 returned error can't find the container with id 768e5de6f55fc22a19d487ad5c746190e485b893f3bb7aa4b23d1d69ce889c8f Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.654937 4743 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" event={"ID":"69e0aa0b-787b-4283-9e2c-3bdad984d8c0","Type":"ContainerStarted","Data":"31ba49758c25d57d9aa3c06bba11c1ac386e2a126f61488f016801f15950f3ec"} Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.655246 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.657057 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" event={"ID":"5e4c2e9b-6668-483f-8f27-6271ee8c3250","Type":"ContainerStarted","Data":"8ab1f7cf587177b49eb074c2eca9a36e3730e1c073c321825ea29f4f51336174"} Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.658416 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-xhnrj" event={"ID":"7e977f59-3c45-4fb5-afc3-4e0595735899","Type":"ContainerStarted","Data":"768e5de6f55fc22a19d487ad5c746190e485b893f3bb7aa4b23d1d69ce889c8f"} Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.659879 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq" event={"ID":"df083e33-3f8a-4094-8e61-3ce2fd8cea48","Type":"ContainerStarted","Data":"d7d01216b7c6b8b56b9e1e056942333bd69a76e77d18fa06c84e00bc798c2818"} Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.661324 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-28x4p" event={"ID":"b01c1d4f-e027-408d-9d5f-c7c7006aa50f","Type":"ContainerStarted","Data":"1c855a78a49ce3d7bde7d3aa9f2aabeaa331bd436f053628ebce66c056606f2d"} Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.661477 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-28x4p" Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.662734 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb" event={"ID":"a8b59a3a-47f6-4efb-8851-d64094821b88","Type":"ContainerStarted","Data":"6c3255db3ad0e8c29266fbec468b9bc96e4890df8b5204196855e682a216da32"} Nov 23 00:20:14 crc kubenswrapper[4743]: E1123 00:20:14.664391 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3\\\"\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f" podUID="ba332c3a-1550-4dba-856c-13ec50c7f04a" Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.687497 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.690147 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-x8gdr" podStartSLOduration=4.134736347 podStartE2EDuration="21.690125671s" podCreationTimestamp="2025-11-23 00:19:53 +0000 UTC" firstStartedPulling="2025-11-23 00:19:56.357166852 +0000 UTC m=+788.435264979" lastFinishedPulling="2025-11-23 00:20:13.912556176 +0000 UTC m=+805.990654303" 
observedRunningTime="2025-11-23 00:20:14.689033884 +0000 UTC m=+806.767132021" watchObservedRunningTime="2025-11-23 00:20:14.690125671 +0000 UTC m=+806.768223798" Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.717140 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-28x4p" podStartSLOduration=4.215508616 podStartE2EDuration="21.717109619s" podCreationTimestamp="2025-11-23 00:19:53 +0000 UTC" firstStartedPulling="2025-11-23 00:19:56.357128911 +0000 UTC m=+788.435227038" lastFinishedPulling="2025-11-23 00:20:13.858729914 +0000 UTC m=+805.936828041" observedRunningTime="2025-11-23 00:20:14.715895569 +0000 UTC m=+806.793993736" watchObservedRunningTime="2025-11-23 00:20:14.717109619 +0000 UTC m=+806.795207746" Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.736184 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq" podStartSLOduration=1.755743395 podStartE2EDuration="21.736162904s" podCreationTimestamp="2025-11-23 00:19:53 +0000 UTC" firstStartedPulling="2025-11-23 00:19:53.798646752 +0000 UTC m=+785.876744889" lastFinishedPulling="2025-11-23 00:20:13.779066271 +0000 UTC m=+805.857164398" observedRunningTime="2025-11-23 00:20:14.735273372 +0000 UTC m=+806.813371509" watchObservedRunningTime="2025-11-23 00:20:14.736162904 +0000 UTC m=+806.814261031" Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.773083 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb" podStartSLOduration=2.001699002 podStartE2EDuration="21.773062703s" podCreationTimestamp="2025-11-23 00:19:53 +0000 UTC" firstStartedPulling="2025-11-23 00:19:54.087201119 +0000 UTC m=+786.165299246" lastFinishedPulling="2025-11-23 00:20:13.85856482 +0000 UTC m=+805.936662947" observedRunningTime="2025-11-23 00:20:14.772309045 +0000 UTC m=+806.850407192" watchObservedRunningTime="2025-11-23 00:20:14.773062703 +0000 UTC m=+806.851160830" Nov 23 00:20:14 crc kubenswrapper[4743]: I1123 00:20:14.855955 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-5c96b8969b-j99fz" podStartSLOduration=2.938363202 podStartE2EDuration="16.855939015s" podCreationTimestamp="2025-11-23 00:19:58 +0000 UTC" firstStartedPulling="2025-11-23 00:19:59.811727544 +0000 UTC m=+791.889825671" lastFinishedPulling="2025-11-23 00:20:13.729303357 +0000 UTC m=+805.807401484" observedRunningTime="2025-11-23 00:20:14.853915565 +0000 UTC m=+806.932013712" watchObservedRunningTime="2025-11-23 00:20:14.855939015 +0000 UTC m=+806.934037142" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.255962 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Nov 23 00:20:15 crc kubenswrapper[4743]: E1123 00:20:15.256239 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerName="extract-content" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.256259 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerName="extract-content" Nov 23 00:20:15 crc kubenswrapper[4743]: E1123 00:20:15.256276 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerName="extract-utilities" Nov 23 00:20:15 crc 
kubenswrapper[4743]: I1123 00:20:15.256285 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerName="extract-utilities" Nov 23 00:20:15 crc kubenswrapper[4743]: E1123 00:20:15.256304 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerName="registry-server" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.256312 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerName="registry-server" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.256438 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e51d2b8-42ec-4508-8092-02949efcc06d" containerName="registry-server" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.257457 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.261739 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.261746 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-zzd8m" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.261922 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.261952 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.261957 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.262163 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.264163 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.264594 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.274237 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.286243 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358308 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/a08f9692-946d-486d-97ef-080401aabf58-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358365 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-transport-certificates\") pod 
\"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358394 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358425 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358578 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358618 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358667 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358710 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358735 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358757 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358778 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358861 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358900 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358954 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.358979 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.460730 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/a08f9692-946d-486d-97ef-080401aabf58-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.460782 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.460809 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-config\") 
pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.460837 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.460873 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.460891 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.460922 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.460949 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.460965 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.460983 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.461001 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " 
pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.461020 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.461037 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.461059 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.461075 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.461609 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.461638 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.461849 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.461855 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.462164 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: 
\"kubernetes.io/configmap/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.462197 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.462729 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.464512 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.467573 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/a08f9692-946d-486d-97ef-080401aabf58-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.467705 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.467869 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.468282 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.469028 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: 
\"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.469982 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.470145 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/a08f9692-946d-486d-97ef-080401aabf58-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"a08f9692-946d-486d-97ef-080401aabf58\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:15 crc kubenswrapper[4743]: I1123 00:20:15.579838 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:20:16 crc kubenswrapper[4743]: I1123 00:20:16.271011 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Nov 23 00:20:16 crc kubenswrapper[4743]: I1123 00:20:16.685456 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a08f9692-946d-486d-97ef-080401aabf58","Type":"ContainerStarted","Data":"3cc7690c7fa2ba7a26fcc07be3eff2d3742b524ed69e11b0374cc57450988924"} Nov 23 00:20:16 crc kubenswrapper[4743]: I1123 00:20:16.965901 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm"] Nov 23 00:20:16 crc kubenswrapper[4743]: I1123 00:20:16.966896 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm" Nov 23 00:20:16 crc kubenswrapper[4743]: I1123 00:20:16.970414 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 23 00:20:16 crc kubenswrapper[4743]: I1123 00:20:16.970711 4743 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-6rzsl" Nov 23 00:20:16 crc kubenswrapper[4743]: I1123 00:20:16.980042 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 23 00:20:17 crc kubenswrapper[4743]: I1123 00:20:17.002614 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm"] Nov 23 00:20:17 crc kubenswrapper[4743]: I1123 00:20:17.096962 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/74d4f852-04b8-4483-9750-ff620a12a379-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-wf5tm\" (UID: \"74d4f852-04b8-4483-9750-ff620a12a379\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm" Nov 23 00:20:17 crc kubenswrapper[4743]: I1123 00:20:17.097245 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhgf2\" (UniqueName: \"kubernetes.io/projected/74d4f852-04b8-4483-9750-ff620a12a379-kube-api-access-qhgf2\") pod \"cert-manager-operator-controller-manager-5446d6888b-wf5tm\" (UID: \"74d4f852-04b8-4483-9750-ff620a12a379\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm" Nov 23 00:20:17 crc kubenswrapper[4743]: I1123 00:20:17.199206 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhgf2\" (UniqueName: \"kubernetes.io/projected/74d4f852-04b8-4483-9750-ff620a12a379-kube-api-access-qhgf2\") pod \"cert-manager-operator-controller-manager-5446d6888b-wf5tm\" (UID: \"74d4f852-04b8-4483-9750-ff620a12a379\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm" Nov 23 00:20:17 crc kubenswrapper[4743]: I1123 00:20:17.199278 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/74d4f852-04b8-4483-9750-ff620a12a379-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-wf5tm\" (UID: \"74d4f852-04b8-4483-9750-ff620a12a379\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm" Nov 23 00:20:17 crc kubenswrapper[4743]: I1123 00:20:17.200041 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/74d4f852-04b8-4483-9750-ff620a12a379-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-wf5tm\" (UID: \"74d4f852-04b8-4483-9750-ff620a12a379\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm" Nov 23 00:20:17 crc kubenswrapper[4743]: I1123 00:20:17.223028 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhgf2\" (UniqueName: \"kubernetes.io/projected/74d4f852-04b8-4483-9750-ff620a12a379-kube-api-access-qhgf2\") pod \"cert-manager-operator-controller-manager-5446d6888b-wf5tm\" (UID: \"74d4f852-04b8-4483-9750-ff620a12a379\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm" Nov 23 00:20:17 crc kubenswrapper[4743]: I1123 00:20:17.286133 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm" Nov 23 00:20:17 crc kubenswrapper[4743]: I1123 00:20:17.740791 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm"] Nov 23 00:20:17 crc kubenswrapper[4743]: W1123 00:20:17.756753 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74d4f852_04b8_4483_9750_ff620a12a379.slice/crio-206b0515d0be355161b9695e55c9ff9a1eeab351dcbb3c138defcd8b6d816fcb WatchSource:0}: Error finding container 206b0515d0be355161b9695e55c9ff9a1eeab351dcbb3c138defcd8b6d816fcb: Status 404 returned error can't find the container with id 206b0515d0be355161b9695e55c9ff9a1eeab351dcbb3c138defcd8b6d816fcb Nov 23 00:20:18 crc kubenswrapper[4743]: I1123 00:20:18.704729 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm" event={"ID":"74d4f852-04b8-4483-9750-ff620a12a379","Type":"ContainerStarted","Data":"206b0515d0be355161b9695e55c9ff9a1eeab351dcbb3c138defcd8b6d816fcb"} Nov 23 00:20:23 crc kubenswrapper[4743]: I1123 00:20:23.690601 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:20:23 crc kubenswrapper[4743]: I1123 00:20:23.691547 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:20:23 crc kubenswrapper[4743]: I1123 00:20:23.691635 4743 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:20:23 crc kubenswrapper[4743]: I1123 00:20:23.692512 4743 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"052e275822d2ee2fb1b2c9a5a7391cecc2e3d47d664aaa005c530fa35f4013d9"} pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 00:20:23 crc kubenswrapper[4743]: I1123 00:20:23.692571 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" containerID="cri-o://052e275822d2ee2fb1b2c9a5a7391cecc2e3d47d664aaa005c530fa35f4013d9" gracePeriod=600 Nov 23 00:20:23 crc kubenswrapper[4743]: I1123 00:20:23.849280 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-28x4p" Nov 23 00:20:24 crc kubenswrapper[4743]: I1123 00:20:24.765310 4743 generic.go:334] "Generic (PLEG): container finished" podID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" 
containerID="052e275822d2ee2fb1b2c9a5a7391cecc2e3d47d664aaa005c530fa35f4013d9" exitCode=0 Nov 23 00:20:24 crc kubenswrapper[4743]: I1123 00:20:24.765357 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerDied","Data":"052e275822d2ee2fb1b2c9a5a7391cecc2e3d47d664aaa005c530fa35f4013d9"} Nov 23 00:20:24 crc kubenswrapper[4743]: I1123 00:20:24.765446 4743 scope.go:117] "RemoveContainer" containerID="ac4a531f9521e82f7c0f94fe0a679c468fadebb72ad0795bf5932aa8b3bb78e4" Nov 23 00:20:39 crc kubenswrapper[4743]: I1123 00:20:39.903122 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"016ef81e58c130b632f512df7f81288af04b6c5a10c9f9bb3144f24506f08545"} Nov 23 00:20:40 crc kubenswrapper[4743]: E1123 00:20:40.002194 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Nov 23 00:20:40 crc kubenswrapper[4743]: E1123 00:20:40.002396 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(a08f9692-946d-486d-97ef-080401aabf58): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Nov 23 00:20:40 crc kubenswrapper[4743]: E1123 00:20:40.004750 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="a08f9692-946d-486d-97ef-080401aabf58" Nov 23 00:20:40 crc kubenswrapper[4743]: I1123 00:20:40.911825 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f" event={"ID":"ba332c3a-1550-4dba-856c-13ec50c7f04a","Type":"ContainerStarted","Data":"f0d43aa191d7c9b65d0658093ce4080d1e8549c17e566ba77e3945de630260ff"} Nov 23 00:20:40 crc kubenswrapper[4743]: I1123 00:20:40.915238 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm" event={"ID":"74d4f852-04b8-4483-9750-ff620a12a379","Type":"ContainerStarted","Data":"b1dfcafbad24bafa410340cebc852859ee65e9f460630d198500c8ea6ca1abc1"} Nov 23 00:20:40 crc kubenswrapper[4743]: I1123 00:20:40.920073 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-xhnrj" event={"ID":"7e977f59-3c45-4fb5-afc3-4e0595735899","Type":"ContainerStarted","Data":"fda9016811f3574af3e2f758ddeff51f35ba332254c494a823c6cce9f0ce2321"} Nov 23 00:20:40 crc kubenswrapper[4743]: E1123 00:20:40.920687 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="a08f9692-946d-486d-97ef-080401aabf58" Nov 23 00:20:40 crc kubenswrapper[4743]: I1123 00:20:40.940620 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dkd8f" podStartSLOduration=2.953396834 podStartE2EDuration="48.940587807s" podCreationTimestamp="2025-11-23 00:19:52 +0000 UTC" firstStartedPulling="2025-11-23 00:19:53.893550906 +0000 UTC m=+785.971649033" lastFinishedPulling="2025-11-23 00:20:39.880741879 +0000 UTC m=+831.958840006" observedRunningTime="2025-11-23 00:20:40.938393614 +0000 UTC m=+833.016491811" watchObservedRunningTime="2025-11-23 00:20:40.940587807 +0000 UTC m=+833.018685984" Nov 23 00:20:40 crc kubenswrapper[4743]: I1123 00:20:40.977130 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-xhnrj" podStartSLOduration=14.459884383 podStartE2EDuration="38.977085117s" podCreationTimestamp="2025-11-23 00:20:02 +0000 UTC" firstStartedPulling="2025-11-23 00:20:14.237998394 +0000 UTC m=+806.316096511" lastFinishedPulling="2025-11-23 00:20:38.755199118 +0000 UTC m=+830.833297245" observedRunningTime="2025-11-23 00:20:40.968257572 +0000 UTC m=+833.046355709" watchObservedRunningTime="2025-11-23 00:20:40.977085117 +0000 UTC m=+833.055183314" Nov 23 00:20:41 crc kubenswrapper[4743]: I1123 00:20:41.054060 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wf5tm" podStartSLOduration=15.019226578 podStartE2EDuration="25.054035534s" podCreationTimestamp="2025-11-23 00:20:16 +0000 UTC" firstStartedPulling="2025-11-23 00:20:17.760078773 +0000 UTC 
m=+809.838176900" lastFinishedPulling="2025-11-23 00:20:27.794887729 +0000 UTC m=+819.872985856" observedRunningTime="2025-11-23 00:20:41.012256195 +0000 UTC m=+833.090354342" watchObservedRunningTime="2025-11-23 00:20:41.054035534 +0000 UTC m=+833.132133671" Nov 23 00:20:41 crc kubenswrapper[4743]: I1123 00:20:41.190811 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Nov 23 00:20:41 crc kubenswrapper[4743]: I1123 00:20:41.231310 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Nov 23 00:20:41 crc kubenswrapper[4743]: E1123 00:20:41.926118 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="a08f9692-946d-486d-97ef-080401aabf58" Nov 23 00:20:42 crc kubenswrapper[4743]: E1123 00:20:42.931504 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="a08f9692-946d-486d-97ef-080401aabf58" Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.471391 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7rvjt"] Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.472297 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.474513 4743 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bvbhw" Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.475653 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.481581 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.483156 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7rvjt"] Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.546605 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c6aa4f6-06ad-43da-afcb-a4f489468654-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7rvjt\" (UID: \"4c6aa4f6-06ad-43da-afcb-a4f489468654\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.546659 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtwzh\" (UniqueName: \"kubernetes.io/projected/4c6aa4f6-06ad-43da-afcb-a4f489468654-kube-api-access-dtwzh\") pod \"cert-manager-webhook-f4fb5df64-7rvjt\" (UID: \"4c6aa4f6-06ad-43da-afcb-a4f489468654\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.648127 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtwzh\" (UniqueName: 
\"kubernetes.io/projected/4c6aa4f6-06ad-43da-afcb-a4f489468654-kube-api-access-dtwzh\") pod \"cert-manager-webhook-f4fb5df64-7rvjt\" (UID: \"4c6aa4f6-06ad-43da-afcb-a4f489468654\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.648240 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c6aa4f6-06ad-43da-afcb-a4f489468654-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7rvjt\" (UID: \"4c6aa4f6-06ad-43da-afcb-a4f489468654\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.669888 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c6aa4f6-06ad-43da-afcb-a4f489468654-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7rvjt\" (UID: \"4c6aa4f6-06ad-43da-afcb-a4f489468654\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.682640 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtwzh\" (UniqueName: \"kubernetes.io/projected/4c6aa4f6-06ad-43da-afcb-a4f489468654-kube-api-access-dtwzh\") pod \"cert-manager-webhook-f4fb5df64-7rvjt\" (UID: \"4c6aa4f6-06ad-43da-afcb-a4f489468654\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" Nov 23 00:20:43 crc kubenswrapper[4743]: I1123 00:20:43.794772 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" Nov 23 00:20:44 crc kubenswrapper[4743]: I1123 00:20:44.225169 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7rvjt"] Nov 23 00:20:44 crc kubenswrapper[4743]: I1123 00:20:44.971793 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" event={"ID":"4c6aa4f6-06ad-43da-afcb-a4f489468654","Type":"ContainerStarted","Data":"f3799271c3a5e0f8ec67535196b5d448fc33f5e0080e0ff5f3af7351b6b3a511"} Nov 23 00:20:47 crc kubenswrapper[4743]: I1123 00:20:47.191504 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d"] Nov 23 00:20:47 crc kubenswrapper[4743]: I1123 00:20:47.193204 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d" Nov 23 00:20:47 crc kubenswrapper[4743]: I1123 00:20:47.196859 4743 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8hnhl" Nov 23 00:20:47 crc kubenswrapper[4743]: I1123 00:20:47.223866 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d"] Nov 23 00:20:47 crc kubenswrapper[4743]: I1123 00:20:47.309628 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwvz\" (UniqueName: \"kubernetes.io/projected/2f55cc68-e977-46ce-8299-a57d98984025-kube-api-access-bcwvz\") pod \"cert-manager-cainjector-855d9ccff4-g8t2d\" (UID: \"2f55cc68-e977-46ce-8299-a57d98984025\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d" Nov 23 00:20:47 crc kubenswrapper[4743]: I1123 00:20:47.309692 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f55cc68-e977-46ce-8299-a57d98984025-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-g8t2d\" (UID: \"2f55cc68-e977-46ce-8299-a57d98984025\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d" Nov 23 00:20:47 crc kubenswrapper[4743]: I1123 00:20:47.411686 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcwvz\" (UniqueName: \"kubernetes.io/projected/2f55cc68-e977-46ce-8299-a57d98984025-kube-api-access-bcwvz\") pod \"cert-manager-cainjector-855d9ccff4-g8t2d\" (UID: \"2f55cc68-e977-46ce-8299-a57d98984025\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d" Nov 23 00:20:47 crc kubenswrapper[4743]: I1123 00:20:47.411762 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f55cc68-e977-46ce-8299-a57d98984025-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-g8t2d\" (UID: \"2f55cc68-e977-46ce-8299-a57d98984025\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d" Nov 23 00:20:47 crc kubenswrapper[4743]: I1123 00:20:47.447338 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f55cc68-e977-46ce-8299-a57d98984025-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-g8t2d\" (UID: \"2f55cc68-e977-46ce-8299-a57d98984025\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d" Nov 23 00:20:47 crc kubenswrapper[4743]: I1123 00:20:47.458413 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcwvz\" (UniqueName: \"kubernetes.io/projected/2f55cc68-e977-46ce-8299-a57d98984025-kube-api-access-bcwvz\") pod \"cert-manager-cainjector-855d9ccff4-g8t2d\" (UID: \"2f55cc68-e977-46ce-8299-a57d98984025\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d" Nov 23 00:20:47 crc kubenswrapper[4743]: I1123 00:20:47.512644 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d" Nov 23 00:20:48 crc kubenswrapper[4743]: I1123 00:20:48.134340 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d"] Nov 23 00:20:49 crc kubenswrapper[4743]: I1123 00:20:49.014884 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d" event={"ID":"2f55cc68-e977-46ce-8299-a57d98984025","Type":"ContainerStarted","Data":"0f81ebd8f96eb3367dada65f022faba163e4acde83ac02291ce9d5d1dbe6f281"} Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.361371 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.364066 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.367003 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.367328 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.367323 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8jg6l" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.367370 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.384577 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463221 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463346 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463390 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463441 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: 
\"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463536 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfr4m\" (UniqueName: \"kubernetes.io/projected/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-kube-api-access-cfr4m\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463565 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463597 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463618 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463641 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463706 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463739 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.463760 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: 
\"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.564979 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565027 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565051 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfr4m\" (UniqueName: \"kubernetes.io/projected/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-kube-api-access-cfr4m\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565078 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565095 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565110 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565131 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565152 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565186 4743 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565212 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565237 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565252 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.565978 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.566055 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.566311 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.566676 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.566740 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: 
\"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.566907 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.566950 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.567001 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.567052 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.581883 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.585298 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfr4m\" (UniqueName: \"kubernetes.io/projected/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-kube-api-access-cfr4m\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.589435 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-1-build\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:50 crc kubenswrapper[4743]: I1123 00:20:50.696317 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.579796 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-8c446"] Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.581134 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-8c446" Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.582900 4743 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-dzt99" Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.583837 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-8c446"] Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.662538 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70528ba7-8ac2-4d82-b61e-22d639fe36ab-bound-sa-token\") pod \"cert-manager-86cb77c54b-8c446\" (UID: \"70528ba7-8ac2-4d82-b61e-22d639fe36ab\") " pod="cert-manager/cert-manager-86cb77c54b-8c446" Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.662759 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9hnn\" (UniqueName: \"kubernetes.io/projected/70528ba7-8ac2-4d82-b61e-22d639fe36ab-kube-api-access-h9hnn\") pod \"cert-manager-86cb77c54b-8c446\" (UID: \"70528ba7-8ac2-4d82-b61e-22d639fe36ab\") " pod="cert-manager/cert-manager-86cb77c54b-8c446" Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.677245 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.763973 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9hnn\" (UniqueName: \"kubernetes.io/projected/70528ba7-8ac2-4d82-b61e-22d639fe36ab-kube-api-access-h9hnn\") pod \"cert-manager-86cb77c54b-8c446\" (UID: \"70528ba7-8ac2-4d82-b61e-22d639fe36ab\") " pod="cert-manager/cert-manager-86cb77c54b-8c446" Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.764034 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70528ba7-8ac2-4d82-b61e-22d639fe36ab-bound-sa-token\") pod \"cert-manager-86cb77c54b-8c446\" (UID: \"70528ba7-8ac2-4d82-b61e-22d639fe36ab\") " pod="cert-manager/cert-manager-86cb77c54b-8c446" Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.782241 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9hnn\" (UniqueName: \"kubernetes.io/projected/70528ba7-8ac2-4d82-b61e-22d639fe36ab-kube-api-access-h9hnn\") pod \"cert-manager-86cb77c54b-8c446\" (UID: \"70528ba7-8ac2-4d82-b61e-22d639fe36ab\") " pod="cert-manager/cert-manager-86cb77c54b-8c446" Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.783042 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70528ba7-8ac2-4d82-b61e-22d639fe36ab-bound-sa-token\") pod \"cert-manager-86cb77c54b-8c446\" (UID: \"70528ba7-8ac2-4d82-b61e-22d639fe36ab\") " pod="cert-manager/cert-manager-86cb77c54b-8c446" Nov 23 00:20:54 crc kubenswrapper[4743]: I1123 00:20:54.895945 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-8c446" Nov 23 00:20:55 crc kubenswrapper[4743]: W1123 00:20:55.114983 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34bb7b9a_9d08_44e5_aa6b_7ac484e73ffd.slice/crio-794c2dbf60022ed004743cb538145350e3005cbfa394cc4a069fce28f2fa6eb8 WatchSource:0}: Error finding container 794c2dbf60022ed004743cb538145350e3005cbfa394cc4a069fce28f2fa6eb8: Status 404 returned error can't find the container with id 794c2dbf60022ed004743cb538145350e3005cbfa394cc4a069fce28f2fa6eb8 Nov 23 00:20:55 crc kubenswrapper[4743]: I1123 00:20:55.632445 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-8c446"] Nov 23 00:20:55 crc kubenswrapper[4743]: W1123 00:20:55.654107 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70528ba7_8ac2_4d82_b61e_22d639fe36ab.slice/crio-805df80c32d577b5804b0d2f73d669c47c2a8c2513f9ab947f9dcd512f9787b0 WatchSource:0}: Error finding container 805df80c32d577b5804b0d2f73d669c47c2a8c2513f9ab947f9dcd512f9787b0: Status 404 returned error can't find the container with id 805df80c32d577b5804b0d2f73d669c47c2a8c2513f9ab947f9dcd512f9787b0 Nov 23 00:20:56 crc kubenswrapper[4743]: I1123 00:20:56.058792 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a08f9692-946d-486d-97ef-080401aabf58","Type":"ContainerStarted","Data":"c5be6452d4eaa477884caf3eea6da9484b1d80412f1b7af9366405aa6d7859b6"} Nov 23 00:20:56 crc kubenswrapper[4743]: I1123 00:20:56.060162 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d" event={"ID":"2f55cc68-e977-46ce-8299-a57d98984025","Type":"ContainerStarted","Data":"f39753fa84b6f7793639f07291ee615a065b7b96b4e94585a13512ce1f6b3583"} Nov 23 00:20:56 crc kubenswrapper[4743]: I1123 00:20:56.073597 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" event={"ID":"4c6aa4f6-06ad-43da-afcb-a4f489468654","Type":"ContainerStarted","Data":"d17fc5dda1eded4dc7304c66ea2890b23cc5fb66267a0edeb21bb0fbd7d2fd56"} Nov 23 00:20:56 crc kubenswrapper[4743]: I1123 00:20:56.073736 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" Nov 23 00:20:56 crc kubenswrapper[4743]: I1123 00:20:56.075656 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd","Type":"ContainerStarted","Data":"794c2dbf60022ed004743cb538145350e3005cbfa394cc4a069fce28f2fa6eb8"} Nov 23 00:20:56 crc kubenswrapper[4743]: I1123 00:20:56.076908 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-8c446" event={"ID":"70528ba7-8ac2-4d82-b61e-22d639fe36ab","Type":"ContainerStarted","Data":"d9b0ce1e8b962e22f885684c883f40e74dee6471e9b7d0e0f8456825d88497ac"} Nov 23 00:20:56 crc kubenswrapper[4743]: I1123 00:20:56.076932 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-8c446" event={"ID":"70528ba7-8ac2-4d82-b61e-22d639fe36ab","Type":"ContainerStarted","Data":"805df80c32d577b5804b0d2f73d669c47c2a8c2513f9ab947f9dcd512f9787b0"} Nov 23 00:20:56 crc kubenswrapper[4743]: I1123 00:20:56.113842 4743 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" podStartSLOduration=2.16053019 podStartE2EDuration="13.113829133s" podCreationTimestamp="2025-11-23 00:20:43 +0000 UTC" firstStartedPulling="2025-11-23 00:20:44.232066392 +0000 UTC m=+836.310164519" lastFinishedPulling="2025-11-23 00:20:55.185365325 +0000 UTC m=+847.263463462" observedRunningTime="2025-11-23 00:20:56.112839659 +0000 UTC m=+848.190937786" watchObservedRunningTime="2025-11-23 00:20:56.113829133 +0000 UTC m=+848.191927260" Nov 23 00:20:56 crc kubenswrapper[4743]: I1123 00:20:56.129246 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-8c446" podStartSLOduration=2.12922406 podStartE2EDuration="2.12922406s" podCreationTimestamp="2025-11-23 00:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:20:56.125037498 +0000 UTC m=+848.203135635" watchObservedRunningTime="2025-11-23 00:20:56.12922406 +0000 UTC m=+848.207322187" Nov 23 00:20:56 crc kubenswrapper[4743]: I1123 00:20:56.143574 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-g8t2d" podStartSLOduration=2.119823266 podStartE2EDuration="9.14355326s" podCreationTimestamp="2025-11-23 00:20:47 +0000 UTC" firstStartedPulling="2025-11-23 00:20:48.178427352 +0000 UTC m=+840.256525479" lastFinishedPulling="2025-11-23 00:20:55.202157346 +0000 UTC m=+847.280255473" observedRunningTime="2025-11-23 00:20:56.139923302 +0000 UTC m=+848.218021429" watchObservedRunningTime="2025-11-23 00:20:56.14355326 +0000 UTC m=+848.221651387" Nov 23 00:20:58 crc kubenswrapper[4743]: I1123 00:20:58.092738 4743 generic.go:334] "Generic (PLEG): container finished" podID="a08f9692-946d-486d-97ef-080401aabf58" containerID="c5be6452d4eaa477884caf3eea6da9484b1d80412f1b7af9366405aa6d7859b6" exitCode=0 Nov 23 00:20:58 crc kubenswrapper[4743]: I1123 00:20:58.092790 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a08f9692-946d-486d-97ef-080401aabf58","Type":"ContainerDied","Data":"c5be6452d4eaa477884caf3eea6da9484b1d80412f1b7af9366405aa6d7859b6"} Nov 23 00:21:00 crc kubenswrapper[4743]: I1123 00:21:00.912247 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.662278 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.664197 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.667314 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.667950 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.668156 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.680081 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.776414 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.776473 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvlmc\" (UniqueName: \"kubernetes.io/projected/e3fcbc14-7917-45e5-bf42-02cd0597a890-kube-api-access-fvlmc\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.776553 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.776576 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.776780 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.776824 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 
00:21:02.776845 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.776893 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.776959 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.777063 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.777086 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.777151 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878079 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878144 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878180 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" 
(UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878228 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878273 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878283 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878340 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878377 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878437 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878510 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878523 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878545 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvlmc\" (UniqueName: \"kubernetes.io/projected/e3fcbc14-7917-45e5-bf42-02cd0597a890-kube-api-access-fvlmc\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878582 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878620 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.878768 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.879261 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.879527 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.879623 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.879714 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.880234 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.880285 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.885631 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.890947 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.900438 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvlmc\" (UniqueName: \"kubernetes.io/projected/e3fcbc14-7917-45e5-bf42-02cd0597a890-kube-api-access-fvlmc\") pod \"service-telemetry-operator-2-build\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:02 crc kubenswrapper[4743]: I1123 00:21:02.980659 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:21:03 crc kubenswrapper[4743]: I1123 00:21:03.416578 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Nov 23 00:21:03 crc kubenswrapper[4743]: I1123 00:21:03.797290 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-7rvjt" Nov 23 00:21:04 crc kubenswrapper[4743]: I1123 00:21:04.146421 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3fcbc14-7917-45e5-bf42-02cd0597a890","Type":"ContainerStarted","Data":"3936b86c6c3248c71e4ded5c97f45646d55b42801dad6c64892c3e7f3a3237e2"} Nov 23 00:21:04 crc kubenswrapper[4743]: I1123 00:21:04.149087 4743 generic.go:334] "Generic (PLEG): container finished" podID="a08f9692-946d-486d-97ef-080401aabf58" containerID="824ff90197ac0659d84e8cfd9f70fb86754730e3f7b9ff0c96b6607730d35085" exitCode=0 Nov 23 00:21:04 crc kubenswrapper[4743]: I1123 00:21:04.149143 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a08f9692-946d-486d-97ef-080401aabf58","Type":"ContainerDied","Data":"824ff90197ac0659d84e8cfd9f70fb86754730e3f7b9ff0c96b6607730d35085"} Nov 23 00:21:05 crc kubenswrapper[4743]: I1123 00:21:05.155130 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd","Type":"ContainerStarted","Data":"e3fa937004c04d4b40d6a23343a8d500c4f58f856e51269597bb939152bd3391"} Nov 23 00:21:05 crc kubenswrapper[4743]: I1123 00:21:05.156770 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3fcbc14-7917-45e5-bf42-02cd0597a890","Type":"ContainerStarted","Data":"8bb4bb396c190fff7fa0eb09daa443131e18390ae4f4af9c3b588987fb319a4c"} Nov 23 00:21:06 crc kubenswrapper[4743]: I1123 00:21:06.164741 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a08f9692-946d-486d-97ef-080401aabf58","Type":"ContainerStarted","Data":"9963cd88bcdfac5c6ec3caf1a15d452324556e2ef1b2869d9d2cff3ef6598cf5"} Nov 23 00:21:06 crc kubenswrapper[4743]: I1123 00:21:06.164811 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" containerName="manage-dockerfile" containerID="cri-o://e3fa937004c04d4b40d6a23343a8d500c4f58f856e51269597bb939152bd3391" gracePeriod=30 Nov 23 00:21:06 crc kubenswrapper[4743]: I1123 00:21:06.165340 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:21:06 crc kubenswrapper[4743]: I1123 00:21:06.270594 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=11.978760252 podStartE2EDuration="51.270576004s" podCreationTimestamp="2025-11-23 00:20:15 +0000 UTC" firstStartedPulling="2025-11-23 00:20:16.293921345 +0000 UTC m=+808.372019472" lastFinishedPulling="2025-11-23 00:20:55.585737097 +0000 UTC m=+847.663835224" observedRunningTime="2025-11-23 00:21:06.266092184 +0000 UTC m=+858.344190331" watchObservedRunningTime="2025-11-23 00:21:06.270576004 +0000 UTC m=+858.348674131" Nov 23 00:21:07 crc 
kubenswrapper[4743]: I1123 00:21:07.172591 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd/manage-dockerfile/0.log" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.172665 4743 generic.go:334] "Generic (PLEG): container finished" podID="34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" containerID="e3fa937004c04d4b40d6a23343a8d500c4f58f856e51269597bb939152bd3391" exitCode=1 Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.172768 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd","Type":"ContainerDied","Data":"e3fa937004c04d4b40d6a23343a8d500c4f58f856e51269597bb939152bd3391"} Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.387113 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd/manage-dockerfile/0.log" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.387690 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.437125 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-proxy-ca-bundles\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.437527 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-pull\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.437669 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-push\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.437765 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfr4m\" (UniqueName: \"kubernetes.io/projected/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-kube-api-access-cfr4m\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.437882 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-root\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.437982 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildworkdir\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.438085 4743 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-run\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.438188 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-system-configs\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.438300 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildcachedir\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.438430 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-ca-bundles\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.438734 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-node-pullsecrets\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.438847 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-blob-cache\") pod \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\" (UID: \"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd\") " Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.438478 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.438954 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.439165 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.440005 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.440127 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.440234 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.440349 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.440373 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.440743 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.445643 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-kube-api-access-cfr4m" (OuterVolumeSpecName: "kube-api-access-cfr4m") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "kube-api-access-cfr4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.445925 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.457703 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" (UID: "34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540184 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540226 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540236 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540245 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540255 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540263 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540272 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540282 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540293 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540304 
4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540314 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:07 crc kubenswrapper[4743]: I1123 00:21:07.540322 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfr4m\" (UniqueName: \"kubernetes.io/projected/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd-kube-api-access-cfr4m\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:08 crc kubenswrapper[4743]: I1123 00:21:08.181699 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd/manage-dockerfile/0.log" Nov 23 00:21:08 crc kubenswrapper[4743]: I1123 00:21:08.181781 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd","Type":"ContainerDied","Data":"794c2dbf60022ed004743cb538145350e3005cbfa394cc4a069fce28f2fa6eb8"} Nov 23 00:21:08 crc kubenswrapper[4743]: I1123 00:21:08.181837 4743 scope.go:117] "RemoveContainer" containerID="e3fa937004c04d4b40d6a23343a8d500c4f58f856e51269597bb939152bd3391" Nov 23 00:21:08 crc kubenswrapper[4743]: I1123 00:21:08.181845 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Nov 23 00:21:08 crc kubenswrapper[4743]: I1123 00:21:08.216817 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 23 00:21:08 crc kubenswrapper[4743]: I1123 00:21:08.225222 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 23 00:21:08 crc kubenswrapper[4743]: I1123 00:21:08.729655 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" path="/var/lib/kubelet/pods/34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd/volumes" Nov 23 00:21:15 crc kubenswrapper[4743]: I1123 00:21:15.649341 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="a08f9692-946d-486d-97ef-080401aabf58" containerName="elasticsearch" probeResult="failure" output=< Nov 23 00:21:15 crc kubenswrapper[4743]: {"timestamp": "2025-11-23T00:21:15+00:00", "message": "readiness probe failed", "curl_rc": "7"} Nov 23 00:21:15 crc kubenswrapper[4743]: > Nov 23 00:21:18 crc kubenswrapper[4743]: I1123 00:21:18.246458 4743 generic.go:334] "Generic (PLEG): container finished" podID="e3fcbc14-7917-45e5-bf42-02cd0597a890" containerID="8bb4bb396c190fff7fa0eb09daa443131e18390ae4f4af9c3b588987fb319a4c" exitCode=0 Nov 23 00:21:18 crc kubenswrapper[4743]: I1123 00:21:18.246558 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3fcbc14-7917-45e5-bf42-02cd0597a890","Type":"ContainerDied","Data":"8bb4bb396c190fff7fa0eb09daa443131e18390ae4f4af9c3b588987fb319a4c"} Nov 23 00:21:19 crc kubenswrapper[4743]: I1123 00:21:19.254087 4743 generic.go:334] "Generic (PLEG): container finished" 
podID="e3fcbc14-7917-45e5-bf42-02cd0597a890" containerID="959e8a4fe98d738f292925517b213ca4403d43fb0fd8a3e095d5f57f06327c22" exitCode=0 Nov 23 00:21:19 crc kubenswrapper[4743]: I1123 00:21:19.254173 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3fcbc14-7917-45e5-bf42-02cd0597a890","Type":"ContainerDied","Data":"959e8a4fe98d738f292925517b213ca4403d43fb0fd8a3e095d5f57f06327c22"} Nov 23 00:21:19 crc kubenswrapper[4743]: I1123 00:21:19.316843 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e3fcbc14-7917-45e5-bf42-02cd0597a890/manage-dockerfile/0.log" Nov 23 00:21:20 crc kubenswrapper[4743]: I1123 00:21:20.265043 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3fcbc14-7917-45e5-bf42-02cd0597a890","Type":"ContainerStarted","Data":"c529fb8e585e701ea9152e75704d9461a7a76486b7db354f24000b24a08d9680"} Nov 23 00:21:20 crc kubenswrapper[4743]: I1123 00:21:20.304662 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=18.304638395 podStartE2EDuration="18.304638395s" podCreationTimestamp="2025-11-23 00:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:21:20.294718072 +0000 UTC m=+872.372816209" watchObservedRunningTime="2025-11-23 00:21:20.304638395 +0000 UTC m=+872.382736522" Nov 23 00:21:20 crc kubenswrapper[4743]: I1123 00:21:20.652621 4743 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="a08f9692-946d-486d-97ef-080401aabf58" containerName="elasticsearch" probeResult="failure" output=< Nov 23 00:21:20 crc kubenswrapper[4743]: {"timestamp": "2025-11-23T00:21:20+00:00", "message": "readiness probe failed", "curl_rc": "7"} Nov 23 00:21:20 crc kubenswrapper[4743]: > Nov 23 00:21:25 crc kubenswrapper[4743]: I1123 00:21:25.784995 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.444161 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xn4ds"] Nov 23 00:21:32 crc kubenswrapper[4743]: E1123 00:21:32.449077 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" containerName="manage-dockerfile" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.449141 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" containerName="manage-dockerfile" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.449377 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="34bb7b9a-9d08-44e5-aa6b-7ac484e73ffd" containerName="manage-dockerfile" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.451269 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.458676 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xn4ds"] Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.634830 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-utilities\") pod \"community-operators-xn4ds\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.634883 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlpk2\" (UniqueName: \"kubernetes.io/projected/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-kube-api-access-dlpk2\") pod \"community-operators-xn4ds\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.634931 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-catalog-content\") pod \"community-operators-xn4ds\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.735667 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-utilities\") pod \"community-operators-xn4ds\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.735713 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlpk2\" (UniqueName: \"kubernetes.io/projected/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-kube-api-access-dlpk2\") pod \"community-operators-xn4ds\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.735755 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-catalog-content\") pod \"community-operators-xn4ds\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.736248 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-catalog-content\") pod \"community-operators-xn4ds\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.736274 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-utilities\") pod \"community-operators-xn4ds\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.758163 4743 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dlpk2\" (UniqueName: \"kubernetes.io/projected/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-kube-api-access-dlpk2\") pod \"community-operators-xn4ds\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:32 crc kubenswrapper[4743]: I1123 00:21:32.770106 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:33 crc kubenswrapper[4743]: I1123 00:21:33.265122 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xn4ds"] Nov 23 00:21:33 crc kubenswrapper[4743]: I1123 00:21:33.350092 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xn4ds" event={"ID":"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb","Type":"ContainerStarted","Data":"4490bad4523e9bd7ef81f8f3612b9be2b895851e5f84da08f6c979b1e1802cbc"} Nov 23 00:21:34 crc kubenswrapper[4743]: I1123 00:21:34.357914 4743 generic.go:334] "Generic (PLEG): container finished" podID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" containerID="b342be8ff87776e47e5d09e70e75af9ae8c0e0d8119434ae819da4751114775c" exitCode=0 Nov 23 00:21:34 crc kubenswrapper[4743]: I1123 00:21:34.358269 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xn4ds" event={"ID":"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb","Type":"ContainerDied","Data":"b342be8ff87776e47e5d09e70e75af9ae8c0e0d8119434ae819da4751114775c"} Nov 23 00:21:36 crc kubenswrapper[4743]: I1123 00:21:36.373643 4743 generic.go:334] "Generic (PLEG): container finished" podID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" containerID="d336063c6521c7d276d541cac3b48972e3525e257846cb8afe7f76d993fcf4ca" exitCode=0 Nov 23 00:21:36 crc kubenswrapper[4743]: I1123 00:21:36.373734 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xn4ds" event={"ID":"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb","Type":"ContainerDied","Data":"d336063c6521c7d276d541cac3b48972e3525e257846cb8afe7f76d993fcf4ca"} Nov 23 00:21:48 crc kubenswrapper[4743]: I1123 00:21:48.470671 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xn4ds" event={"ID":"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb","Type":"ContainerStarted","Data":"fe8319001efd11721ee220b1bd1296051998974c7065b4df6e6b1b1d404f3945"} Nov 23 00:21:48 crc kubenswrapper[4743]: I1123 00:21:48.493253 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xn4ds" podStartSLOduration=4.312785568 podStartE2EDuration="16.493238574s" podCreationTimestamp="2025-11-23 00:21:32 +0000 UTC" firstStartedPulling="2025-11-23 00:21:34.359646079 +0000 UTC m=+886.437744216" lastFinishedPulling="2025-11-23 00:21:46.540099105 +0000 UTC m=+898.618197222" observedRunningTime="2025-11-23 00:21:48.490872256 +0000 UTC m=+900.568970443" watchObservedRunningTime="2025-11-23 00:21:48.493238574 +0000 UTC m=+900.571336701" Nov 23 00:21:52 crc kubenswrapper[4743]: I1123 00:21:52.770638 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:52 crc kubenswrapper[4743]: I1123 00:21:52.771062 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:52 crc kubenswrapper[4743]: I1123 00:21:52.806821 4743 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:53 crc kubenswrapper[4743]: I1123 00:21:53.545154 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:53 crc kubenswrapper[4743]: I1123 00:21:53.600008 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xn4ds"] Nov 23 00:21:55 crc kubenswrapper[4743]: I1123 00:21:55.523069 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xn4ds" podUID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" containerName="registry-server" containerID="cri-o://fe8319001efd11721ee220b1bd1296051998974c7065b4df6e6b1b1d404f3945" gracePeriod=2 Nov 23 00:21:56 crc kubenswrapper[4743]: I1123 00:21:56.533875 4743 generic.go:334] "Generic (PLEG): container finished" podID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" containerID="fe8319001efd11721ee220b1bd1296051998974c7065b4df6e6b1b1d404f3945" exitCode=0 Nov 23 00:21:56 crc kubenswrapper[4743]: I1123 00:21:56.533950 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xn4ds" event={"ID":"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb","Type":"ContainerDied","Data":"fe8319001efd11721ee220b1bd1296051998974c7065b4df6e6b1b1d404f3945"} Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.177700 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.187973 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-utilities\") pod \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.188099 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-catalog-content\") pod \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.188179 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlpk2\" (UniqueName: \"kubernetes.io/projected/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-kube-api-access-dlpk2\") pod \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\" (UID: \"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb\") " Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.189214 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-utilities" (OuterVolumeSpecName: "utilities") pod "0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" (UID: "0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.193716 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-kube-api-access-dlpk2" (OuterVolumeSpecName: "kube-api-access-dlpk2") pod "0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" (UID: "0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb"). InnerVolumeSpecName "kube-api-access-dlpk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.246404 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" (UID: "0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.289001 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.289070 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.289086 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlpk2\" (UniqueName: \"kubernetes.io/projected/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb-kube-api-access-dlpk2\") on node \"crc\" DevicePath \"\"" Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.543134 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xn4ds" event={"ID":"0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb","Type":"ContainerDied","Data":"4490bad4523e9bd7ef81f8f3612b9be2b895851e5f84da08f6c979b1e1802cbc"} Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.543182 4743 scope.go:117] "RemoveContainer" containerID="fe8319001efd11721ee220b1bd1296051998974c7065b4df6e6b1b1d404f3945" Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.543298 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xn4ds" Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.566149 4743 scope.go:117] "RemoveContainer" containerID="d336063c6521c7d276d541cac3b48972e3525e257846cb8afe7f76d993fcf4ca" Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.572616 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xn4ds"] Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.580507 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xn4ds"] Nov 23 00:21:57 crc kubenswrapper[4743]: I1123 00:21:57.593678 4743 scope.go:117] "RemoveContainer" containerID="b342be8ff87776e47e5d09e70e75af9ae8c0e0d8119434ae819da4751114775c" Nov 23 00:21:58 crc kubenswrapper[4743]: I1123 00:21:58.735051 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" path="/var/lib/kubelet/pods/0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb/volumes" Nov 23 00:22:53 crc kubenswrapper[4743]: I1123 00:22:53.690086 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:22:53 crc kubenswrapper[4743]: I1123 00:22:53.690657 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:23:22 crc kubenswrapper[4743]: I1123 00:23:22.173835 4743 generic.go:334] "Generic (PLEG): container finished" podID="e3fcbc14-7917-45e5-bf42-02cd0597a890" containerID="c529fb8e585e701ea9152e75704d9461a7a76486b7db354f24000b24a08d9680" exitCode=0 Nov 23 00:23:22 crc kubenswrapper[4743]: I1123 00:23:22.174420 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3fcbc14-7917-45e5-bf42-02cd0597a890","Type":"ContainerDied","Data":"c529fb8e585e701ea9152e75704d9461a7a76486b7db354f24000b24a08d9680"} Nov 23 00:23:22 crc kubenswrapper[4743]: E1123 00:23:22.213882 4743 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3fcbc14_7917_45e5_bf42_02cd0597a890.slice/crio-conmon-c529fb8e585e701ea9152e75704d9461a7a76486b7db354f24000b24a08d9680.scope\": RecentStats: unable to find data in memory cache]" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.464943 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.532188 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-blob-cache\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.532261 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-node-pullsecrets\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.532300 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-proxy-ca-bundles\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.532319 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-root\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.532349 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-system-configs\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.532373 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.533002 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.532952 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.532406 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvlmc\" (UniqueName: \"kubernetes.io/projected/e3fcbc14-7917-45e5-bf42-02cd0597a890-kube-api-access-fvlmc\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.533138 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-run\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.533194 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-pull\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.533247 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-push\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.533317 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildworkdir\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.533361 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-ca-bundles\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.533440 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildcachedir\") pod \"e3fcbc14-7917-45e5-bf42-02cd0597a890\" (UID: \"e3fcbc14-7917-45e5-bf42-02cd0597a890\") " Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.534004 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.534035 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.534058 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.534100 4743 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.534183 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.535929 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.538327 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3fcbc14-7917-45e5-bf42-02cd0597a890-kube-api-access-fvlmc" (OuterVolumeSpecName: "kube-api-access-fvlmc") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "kube-api-access-fvlmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.538330 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.545684 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.590249 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.635040 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvlmc\" (UniqueName: \"kubernetes.io/projected/e3fcbc14-7917-45e5-bf42-02cd0597a890-kube-api-access-fvlmc\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.635074 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.635083 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.635092 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/e3fcbc14-7917-45e5-bf42-02cd0597a890-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.635103 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.635114 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.635123 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3fcbc14-7917-45e5-bf42-02cd0597a890-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.690527 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.690632 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.733543 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:23:23 crc kubenswrapper[4743]: I1123 00:23:23.736686 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:24 crc kubenswrapper[4743]: I1123 00:23:24.198404 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3fcbc14-7917-45e5-bf42-02cd0597a890","Type":"ContainerDied","Data":"3936b86c6c3248c71e4ded5c97f45646d55b42801dad6c64892c3e7f3a3237e2"} Nov 23 00:23:24 crc kubenswrapper[4743]: I1123 00:23:24.198454 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3936b86c6c3248c71e4ded5c97f45646d55b42801dad6c64892c3e7f3a3237e2" Nov 23 00:23:24 crc kubenswrapper[4743]: I1123 00:23:24.198601 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Nov 23 00:23:25 crc kubenswrapper[4743]: I1123 00:23:25.689664 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e3fcbc14-7917-45e5-bf42-02cd0597a890" (UID: "e3fcbc14-7917-45e5-bf42-02cd0597a890"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:23:25 crc kubenswrapper[4743]: I1123 00:23:25.764044 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3fcbc14-7917-45e5-bf42-02cd0597a890-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.681761 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 23 00:23:28 crc kubenswrapper[4743]: E1123 00:23:28.682441 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3fcbc14-7917-45e5-bf42-02cd0597a890" containerName="manage-dockerfile" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.682464 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3fcbc14-7917-45e5-bf42-02cd0597a890" containerName="manage-dockerfile" Nov 23 00:23:28 crc kubenswrapper[4743]: E1123 00:23:28.682483 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" containerName="extract-utilities" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.682519 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" containerName="extract-utilities" Nov 23 00:23:28 crc kubenswrapper[4743]: E1123 00:23:28.682543 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3fcbc14-7917-45e5-bf42-02cd0597a890" containerName="docker-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.682556 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3fcbc14-7917-45e5-bf42-02cd0597a890" containerName="docker-build" Nov 23 00:23:28 crc kubenswrapper[4743]: E1123 00:23:28.682577 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3fcbc14-7917-45e5-bf42-02cd0597a890" containerName="git-clone" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.682587 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3fcbc14-7917-45e5-bf42-02cd0597a890" 
containerName="git-clone" Nov 23 00:23:28 crc kubenswrapper[4743]: E1123 00:23:28.682602 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" containerName="extract-content" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.682613 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" containerName="extract-content" Nov 23 00:23:28 crc kubenswrapper[4743]: E1123 00:23:28.682626 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" containerName="registry-server" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.682638 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" containerName="registry-server" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.682802 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9d82da-acbd-47b7-a510-ea8ffc7e0cbb" containerName="registry-server" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.682821 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3fcbc14-7917-45e5-bf42-02cd0597a890" containerName="docker-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.683776 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.687684 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.687700 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.688045 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8jg6l" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.688749 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.771872 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.802637 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.802783 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.802810 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: 
\"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.802831 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.802899 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz952\" (UniqueName: \"kubernetes.io/projected/f0f8e45d-42f7-440d-8a47-9a30b2515354-kube-api-access-rz952\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.802943 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.802991 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.803050 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.803073 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.803116 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.803142 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 
23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.803172 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904628 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904682 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904705 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904720 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz952\" (UniqueName: \"kubernetes.io/projected/f0f8e45d-42f7-440d-8a47-9a30b2515354-kube-api-access-rz952\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904742 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904759 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904800 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904830 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904851 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904873 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904893 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904909 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.904991 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.905024 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.905242 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.905347 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.905665 4743 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.905761 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.905785 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.906117 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.906717 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.914095 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.917809 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:28 crc kubenswrapper[4743]: I1123 00:23:28.923286 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz952\" (UniqueName: \"kubernetes.io/projected/f0f8e45d-42f7-440d-8a47-9a30b2515354-kube-api-access-rz952\") pod \"smart-gateway-operator-1-build\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:29 crc kubenswrapper[4743]: I1123 00:23:29.041141 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:29 crc kubenswrapper[4743]: I1123 00:23:29.294258 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 23 00:23:30 crc kubenswrapper[4743]: I1123 00:23:30.240685 4743 generic.go:334] "Generic (PLEG): container finished" podID="f0f8e45d-42f7-440d-8a47-9a30b2515354" containerID="ff4bf6f2df43f3ed91a1604d7eb46e4c1beb90299ec9b669c041cb191507b561" exitCode=0 Nov 23 00:23:30 crc kubenswrapper[4743]: I1123 00:23:30.240769 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"f0f8e45d-42f7-440d-8a47-9a30b2515354","Type":"ContainerDied","Data":"ff4bf6f2df43f3ed91a1604d7eb46e4c1beb90299ec9b669c041cb191507b561"} Nov 23 00:23:30 crc kubenswrapper[4743]: I1123 00:23:30.241316 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"f0f8e45d-42f7-440d-8a47-9a30b2515354","Type":"ContainerStarted","Data":"a241baa6321bd46e1ba2c93b046812760398545a053591c6ed0366628a4b9026"} Nov 23 00:23:31 crc kubenswrapper[4743]: I1123 00:23:31.249780 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"f0f8e45d-42f7-440d-8a47-9a30b2515354","Type":"ContainerStarted","Data":"e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98"} Nov 23 00:23:31 crc kubenswrapper[4743]: I1123 00:23:31.276809 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.276790855 podStartE2EDuration="3.276790855s" podCreationTimestamp="2025-11-23 00:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:23:31.273542375 +0000 UTC m=+1003.351640522" watchObservedRunningTime="2025-11-23 00:23:31.276790855 +0000 UTC m=+1003.354888992" Nov 23 00:23:39 crc kubenswrapper[4743]: I1123 00:23:39.670000 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 23 00:23:39 crc kubenswrapper[4743]: I1123 00:23:39.671151 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="f0f8e45d-42f7-440d-8a47-9a30b2515354" containerName="docker-build" containerID="cri-o://e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98" gracePeriod=30 Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.294844 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.296819 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.300359 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.300526 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.301964 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.328802 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.392658 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.392719 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.392742 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.392763 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.393002 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.393142 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.393181 4743 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.393338 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.393374 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.393424 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.393475 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9rff\" (UniqueName: \"kubernetes.io/projected/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-kube-api-access-j9rff\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.393586 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.495007 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.495075 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.495099 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: 
\"kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.495140 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.495160 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.495181 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.495206 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9rff\" (UniqueName: \"kubernetes.io/projected/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-kube-api-access-j9rff\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.495231 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.495254 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.495285 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.495306 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 
00:23:41.495336 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.496101 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.496161 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.496203 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.496605 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.496671 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.496671 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.497037 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.497072 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.497215 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.502030 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.509459 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.515982 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9rff\" (UniqueName: \"kubernetes.io/projected/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-kube-api-access-j9rff\") pod \"smart-gateway-operator-2-build\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.612382 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:23:41 crc kubenswrapper[4743]: I1123 00:23:41.877930 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Nov 23 00:23:42 crc kubenswrapper[4743]: I1123 00:23:42.326443 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"770bb7ce-c40e-45d6-b1d3-75e3bad3d646","Type":"ContainerStarted","Data":"3cffb6af3ad4599c0551988807747f88d847d161da209037e0f5cac425d794d6"} Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.104972 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_f0f8e45d-42f7-440d-8a47-9a30b2515354/docker-build/0.log" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.105870 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245206 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-blob-cache\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245328 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-root\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245369 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-run\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245410 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-proxy-ca-bundles\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245454 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-ca-bundles\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245501 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildcachedir\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245549 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-pull\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245585 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-push\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245618 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz952\" (UniqueName: \"kubernetes.io/projected/f0f8e45d-42f7-440d-8a47-9a30b2515354-kube-api-access-rz952\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245615 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245672 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-system-configs\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245717 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildworkdir\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.245746 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-node-pullsecrets\") pod \"f0f8e45d-42f7-440d-8a47-9a30b2515354\" (UID: \"f0f8e45d-42f7-440d-8a47-9a30b2515354\") " Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.246073 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.246133 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.246543 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.246662 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.246697 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.247998 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.248151 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.252850 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.253081 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f8e45d-42f7-440d-8a47-9a30b2515354-kube-api-access-rz952" (OuterVolumeSpecName: "kube-api-access-rz952") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "kube-api-access-rz952". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.253781 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.340337 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_f0f8e45d-42f7-440d-8a47-9a30b2515354/docker-build/0.log" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.340904 4743 generic.go:334] "Generic (PLEG): container finished" podID="f0f8e45d-42f7-440d-8a47-9a30b2515354" containerID="e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98" exitCode=1 Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.340983 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"f0f8e45d-42f7-440d-8a47-9a30b2515354","Type":"ContainerDied","Data":"e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98"} Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.341155 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"f0f8e45d-42f7-440d-8a47-9a30b2515354","Type":"ContainerDied","Data":"a241baa6321bd46e1ba2c93b046812760398545a053591c6ed0366628a4b9026"} Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.340993 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.341201 4743 scope.go:117] "RemoveContainer" containerID="e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.343228 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"770bb7ce-c40e-45d6-b1d3-75e3bad3d646","Type":"ContainerStarted","Data":"3e2e3a297b4870c6fc5d80e282de3f066c0903c36bb26b896b3fe1285694f69a"} Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.347023 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.347129 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.347198 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f0f8e45d-42f7-440d-8a47-9a30b2515354-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.347259 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.347311 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.347361 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 
00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.347412 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.347471 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/f0f8e45d-42f7-440d-8a47-9a30b2515354-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.347696 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz952\" (UniqueName: \"kubernetes.io/projected/f0f8e45d-42f7-440d-8a47-9a30b2515354-kube-api-access-rz952\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.417376 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.433728 4743 scope.go:117] "RemoveContainer" containerID="ff4bf6f2df43f3ed91a1604d7eb46e4c1beb90299ec9b669c041cb191507b561" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.449771 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.457877 4743 scope.go:117] "RemoveContainer" containerID="e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98" Nov 23 00:23:44 crc kubenswrapper[4743]: E1123 00:23:44.458349 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98\": container with ID starting with e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98 not found: ID does not exist" containerID="e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.458392 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98"} err="failed to get container status \"e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98\": rpc error: code = NotFound desc = could not find container \"e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98\": container with ID starting with e5f368339367b9a61b4650b0df43f6d63cc60b2f1c7c806ea4345faf56b16e98 not found: ID does not exist" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.458422 4743 scope.go:117] "RemoveContainer" containerID="ff4bf6f2df43f3ed91a1604d7eb46e4c1beb90299ec9b669c041cb191507b561" Nov 23 00:23:44 crc kubenswrapper[4743]: E1123 00:23:44.459366 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff4bf6f2df43f3ed91a1604d7eb46e4c1beb90299ec9b669c041cb191507b561\": container with ID starting with ff4bf6f2df43f3ed91a1604d7eb46e4c1beb90299ec9b669c041cb191507b561 not found: ID does not 
exist" containerID="ff4bf6f2df43f3ed91a1604d7eb46e4c1beb90299ec9b669c041cb191507b561" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.459425 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff4bf6f2df43f3ed91a1604d7eb46e4c1beb90299ec9b669c041cb191507b561"} err="failed to get container status \"ff4bf6f2df43f3ed91a1604d7eb46e4c1beb90299ec9b669c041cb191507b561\": rpc error: code = NotFound desc = could not find container \"ff4bf6f2df43f3ed91a1604d7eb46e4c1beb90299ec9b669c041cb191507b561\": container with ID starting with ff4bf6f2df43f3ed91a1604d7eb46e4c1beb90299ec9b669c041cb191507b561 not found: ID does not exist" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.734722 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f0f8e45d-42f7-440d-8a47-9a30b2515354" (UID: "f0f8e45d-42f7-440d-8a47-9a30b2515354"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.758180 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f0f8e45d-42f7-440d-8a47-9a30b2515354-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.978028 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 23 00:23:44 crc kubenswrapper[4743]: I1123 00:23:44.981663 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 23 00:23:45 crc kubenswrapper[4743]: I1123 00:23:45.350717 4743 generic.go:334] "Generic (PLEG): container finished" podID="770bb7ce-c40e-45d6-b1d3-75e3bad3d646" containerID="3e2e3a297b4870c6fc5d80e282de3f066c0903c36bb26b896b3fe1285694f69a" exitCode=0 Nov 23 00:23:45 crc kubenswrapper[4743]: I1123 00:23:45.350812 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"770bb7ce-c40e-45d6-b1d3-75e3bad3d646","Type":"ContainerDied","Data":"3e2e3a297b4870c6fc5d80e282de3f066c0903c36bb26b896b3fe1285694f69a"} Nov 23 00:23:46 crc kubenswrapper[4743]: I1123 00:23:46.364252 4743 generic.go:334] "Generic (PLEG): container finished" podID="770bb7ce-c40e-45d6-b1d3-75e3bad3d646" containerID="464e82349a6300e93971837d6262495d1013e7ba9337e4b28342c66f2ea05678" exitCode=0 Nov 23 00:23:46 crc kubenswrapper[4743]: I1123 00:23:46.364680 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"770bb7ce-c40e-45d6-b1d3-75e3bad3d646","Type":"ContainerDied","Data":"464e82349a6300e93971837d6262495d1013e7ba9337e4b28342c66f2ea05678"} Nov 23 00:23:46 crc kubenswrapper[4743]: I1123 00:23:46.404717 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_770bb7ce-c40e-45d6-b1d3-75e3bad3d646/manage-dockerfile/0.log" Nov 23 00:23:46 crc kubenswrapper[4743]: I1123 00:23:46.734378 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f8e45d-42f7-440d-8a47-9a30b2515354" path="/var/lib/kubelet/pods/f0f8e45d-42f7-440d-8a47-9a30b2515354/volumes" Nov 23 00:23:47 crc kubenswrapper[4743]: I1123 00:23:47.374127 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"770bb7ce-c40e-45d6-b1d3-75e3bad3d646","Type":"ContainerStarted","Data":"b390b53efb45fec0af032552d63afc02216f48d15a13fc7085831ab06b289227"} Nov 23 00:23:47 crc kubenswrapper[4743]: I1123 00:23:47.405898 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=6.405878359 podStartE2EDuration="6.405878359s" podCreationTimestamp="2025-11-23 00:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:23:47.399915712 +0000 UTC m=+1019.478013859" watchObservedRunningTime="2025-11-23 00:23:47.405878359 +0000 UTC m=+1019.483976496" Nov 23 00:23:53 crc kubenswrapper[4743]: I1123 00:23:53.690872 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:23:53 crc kubenswrapper[4743]: I1123 00:23:53.692109 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:23:53 crc kubenswrapper[4743]: I1123 00:23:53.692194 4743 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:23:53 crc kubenswrapper[4743]: I1123 00:23:53.693091 4743 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"016ef81e58c130b632f512df7f81288af04b6c5a10c9f9bb3144f24506f08545"} pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 00:23:53 crc kubenswrapper[4743]: I1123 00:23:53.693210 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" containerID="cri-o://016ef81e58c130b632f512df7f81288af04b6c5a10c9f9bb3144f24506f08545" gracePeriod=600 Nov 23 00:23:56 crc kubenswrapper[4743]: I1123 00:23:56.435384 4743 generic.go:334] "Generic (PLEG): container finished" podID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerID="016ef81e58c130b632f512df7f81288af04b6c5a10c9f9bb3144f24506f08545" exitCode=0 Nov 23 00:23:56 crc kubenswrapper[4743]: I1123 00:23:56.435554 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerDied","Data":"016ef81e58c130b632f512df7f81288af04b6c5a10c9f9bb3144f24506f08545"} Nov 23 00:23:56 crc kubenswrapper[4743]: I1123 00:23:56.435783 4743 scope.go:117] "RemoveContainer" containerID="052e275822d2ee2fb1b2c9a5a7391cecc2e3d47d664aaa005c530fa35f4013d9" Nov 23 00:24:00 crc kubenswrapper[4743]: I1123 00:24:00.471852 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" 
event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"2a423b2693fed3abc2a8a9390dc3769db28cfe3fcfc6b65f5803f65e8c4773a0"} Nov 23 00:25:22 crc kubenswrapper[4743]: I1123 00:25:22.214398 4743 generic.go:334] "Generic (PLEG): container finished" podID="770bb7ce-c40e-45d6-b1d3-75e3bad3d646" containerID="b390b53efb45fec0af032552d63afc02216f48d15a13fc7085831ab06b289227" exitCode=0 Nov 23 00:25:22 crc kubenswrapper[4743]: I1123 00:25:22.214548 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"770bb7ce-c40e-45d6-b1d3-75e3bad3d646","Type":"ContainerDied","Data":"b390b53efb45fec0af032552d63afc02216f48d15a13fc7085831ab06b289227"} Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.462009 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.591715 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9rff\" (UniqueName: \"kubernetes.io/projected/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-kube-api-access-j9rff\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.591833 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-node-pullsecrets\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.591882 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-system-configs\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.591993 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.592049 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-blob-cache\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.592124 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildworkdir\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.592197 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-root\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.592318 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-ca-bundles\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.592408 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-pull\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.592462 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-run\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.592548 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-proxy-ca-bundles\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.592587 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-push\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.592627 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildcachedir\") pod \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\" (UID: \"770bb7ce-c40e-45d6-b1d3-75e3bad3d646\") " Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.592828 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.593237 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.593394 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.593425 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.593448 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.593579 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.593596 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.593742 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.597011 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.600302 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.604647 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.605678 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-kube-api-access-j9rff" (OuterVolumeSpecName: "kube-api-access-j9rff") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "kube-api-access-j9rff". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.694159 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9rff\" (UniqueName: \"kubernetes.io/projected/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-kube-api-access-j9rff\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.694198 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.694213 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.694225 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.694237 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.694249 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.694260 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.793915 4743 operation_generator.go:803] 
Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.793915 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 00:25:23 crc kubenswrapper[4743]: I1123 00:25:23.794836 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-build-blob-cache\") on node \"crc\" DevicePath \"\""
Nov 23 00:25:24 crc kubenswrapper[4743]: I1123 00:25:24.227854 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"770bb7ce-c40e-45d6-b1d3-75e3bad3d646","Type":"ContainerDied","Data":"3cffb6af3ad4599c0551988807747f88d847d161da209037e0f5cac425d794d6"}
Nov 23 00:25:24 crc kubenswrapper[4743]: I1123 00:25:24.227902 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cffb6af3ad4599c0551988807747f88d847d161da209037e0f5cac425d794d6"
Nov 23 00:25:24 crc kubenswrapper[4743]: I1123 00:25:24.227941 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Nov 23 00:25:25 crc kubenswrapper[4743]: I1123 00:25:25.716055 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "770bb7ce-c40e-45d6-b1d3-75e3bad3d646" (UID: "770bb7ce-c40e-45d6-b1d3-75e3bad3d646"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 00:25:25 crc kubenswrapper[4743]: I1123 00:25:25.719766 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/770bb7ce-c40e-45d6-b1d3-75e3bad3d646-container-storage-root\") on node \"crc\" DevicePath \"\""
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.347647 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"]
Nov 23 00:25:28 crc kubenswrapper[4743]: E1123 00:25:28.348236 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f8e45d-42f7-440d-8a47-9a30b2515354" containerName="docker-build"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.348249 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f8e45d-42f7-440d-8a47-9a30b2515354" containerName="docker-build"
Nov 23 00:25:28 crc kubenswrapper[4743]: E1123 00:25:28.348257 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770bb7ce-c40e-45d6-b1d3-75e3bad3d646" containerName="manage-dockerfile"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.348263 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="770bb7ce-c40e-45d6-b1d3-75e3bad3d646" containerName="manage-dockerfile"
Nov 23 00:25:28 crc kubenswrapper[4743]: E1123 00:25:28.348278 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f8e45d-42f7-440d-8a47-9a30b2515354" containerName="manage-dockerfile"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.348284 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f8e45d-42f7-440d-8a47-9a30b2515354" containerName="manage-dockerfile"
Nov 23 00:25:28 crc kubenswrapper[4743]: E1123 00:25:28.348298 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770bb7ce-c40e-45d6-b1d3-75e3bad3d646" containerName="docker-build"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.348304 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="770bb7ce-c40e-45d6-b1d3-75e3bad3d646" containerName="docker-build"
Nov 23 00:25:28 crc kubenswrapper[4743]: E1123 00:25:28.348312 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770bb7ce-c40e-45d6-b1d3-75e3bad3d646" containerName="git-clone"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.348318 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="770bb7ce-c40e-45d6-b1d3-75e3bad3d646" containerName="git-clone"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.348410 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="770bb7ce-c40e-45d6-b1d3-75e3bad3d646" containerName="docker-build"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.348424 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f8e45d-42f7-440d-8a47-9a30b2515354" containerName="docker-build"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.349162 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.351751 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.351909 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8jg6l"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.352012 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.356083 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.372181 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"]
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.459287 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npn87\" (UniqueName: \"kubernetes.io/projected/69271230-9944-404e-8ce2-4fee0b8c6e52-kube-api-access-npn87\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.459970 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build"
Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.460109 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build"
\"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-system-configs\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.460418 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-buildworkdir\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.460588 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.460731 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-root\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.460860 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-pull\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.461002 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-run\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.461116 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-push\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.461248 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.461355 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-buildcachedir\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562151 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562199 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-system-configs\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562220 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-buildworkdir\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562247 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562262 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-root\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562285 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-pull\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562313 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-run\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562327 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-push\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562351 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562365 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-buildcachedir\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562387 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npn87\" (UniqueName: \"kubernetes.io/projected/69271230-9944-404e-8ce2-4fee0b8c6e52-kube-api-access-npn87\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562417 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562856 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-root\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562872 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-buildcachedir\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.562990 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.563222 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-system-configs\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.563599 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.563685 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.563960 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-buildworkdir\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 
00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.563982 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.832538 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-run\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.833038 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-push\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.833694 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-pull\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.836304 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npn87\" (UniqueName: \"kubernetes.io/projected/69271230-9944-404e-8ce2-4fee0b8c6e52-kube-api-access-npn87\") pod \"sg-core-1-build\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " pod="service-telemetry/sg-core-1-build" Nov 23 00:25:28 crc kubenswrapper[4743]: I1123 00:25:28.972909 4743 util.go:30] "No sandbox for pod can be found. 
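Startup mirrors teardown: every volume above is first verified as attached, then each MountVolume.SetUp must succeed (the secret and projected volumes land last, around 00:25:28.83x), and only then (at 00:25:28.972909, just below) does the kubelet move on to creating the pod sandbox. A sketch of gating sandbox creation on all mounts completing, with a hypothetical setUp callback rather than real volume plugins:

package main

import (
	"fmt"
	"sync"
)

// mountAll runs MountVolume.SetUp for each volume concurrently and only
// returns once every mount has finished, mirroring how the sandbox is not
// started until all volumes report "MountVolume.SetUp succeeded".
func mountAll(volumes []string, setUp func(string) error) error {
	var wg sync.WaitGroup
	errs := make(chan error, len(volumes))
	for _, vol := range volumes {
		wg.Add(1)
		go func(v string) {
			defer wg.Done()
			if err := setUp(v); err != nil {
				errs <- fmt.Errorf("mount %q: %w", v, err)
			}
		}(vol)
	}
	wg.Wait()
	close(errs)
	for err := range errs {
		return err // any failure blocks sandbox creation
	}
	fmt.Println("all volumes mounted; safe to start the pod sandbox")
	return nil
}

func main() {
	vols := []string{"buildworkdir", "builder-dockercfg-8jg6l-push", "kube-api-access-npn87"}
	_ = mountAll(vols, func(string) error { return nil })
}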
Need to start a new one" pod="service-telemetry/sg-core-1-build" Nov 23 00:25:29 crc kubenswrapper[4743]: I1123 00:25:29.421603 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 23 00:25:29 crc kubenswrapper[4743]: W1123 00:25:29.433404 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69271230_9944_404e_8ce2_4fee0b8c6e52.slice/crio-1cca0faf6259832df2af6184fe1c200003e1196cb36694063c09234c8eca70d5 WatchSource:0}: Error finding container 1cca0faf6259832df2af6184fe1c200003e1196cb36694063c09234c8eca70d5: Status 404 returned error can't find the container with id 1cca0faf6259832df2af6184fe1c200003e1196cb36694063c09234c8eca70d5 Nov 23 00:25:30 crc kubenswrapper[4743]: I1123 00:25:30.276972 4743 generic.go:334] "Generic (PLEG): container finished" podID="69271230-9944-404e-8ce2-4fee0b8c6e52" containerID="46d0c2e8ded75dae14156682c7b9c922f3541ac3969c92e2dc280bbdf623f18d" exitCode=0 Nov 23 00:25:30 crc kubenswrapper[4743]: I1123 00:25:30.277055 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"69271230-9944-404e-8ce2-4fee0b8c6e52","Type":"ContainerDied","Data":"46d0c2e8ded75dae14156682c7b9c922f3541ac3969c92e2dc280bbdf623f18d"} Nov 23 00:25:30 crc kubenswrapper[4743]: I1123 00:25:30.277098 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"69271230-9944-404e-8ce2-4fee0b8c6e52","Type":"ContainerStarted","Data":"1cca0faf6259832df2af6184fe1c200003e1196cb36694063c09234c8eca70d5"} Nov 23 00:25:31 crc kubenswrapper[4743]: I1123 00:25:31.298991 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"69271230-9944-404e-8ce2-4fee0b8c6e52","Type":"ContainerStarted","Data":"deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1"} Nov 23 00:25:31 crc kubenswrapper[4743]: I1123 00:25:31.338455 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.338419938 podStartE2EDuration="3.338419938s" podCreationTimestamp="2025-11-23 00:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:25:31.337831113 +0000 UTC m=+1123.415929280" watchObservedRunningTime="2025-11-23 00:25:31.338419938 +0000 UTC m=+1123.416518095" Nov 23 00:25:38 crc kubenswrapper[4743]: I1123 00:25:38.689653 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 23 00:25:38 crc kubenswrapper[4743]: I1123 00:25:38.690640 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="69271230-9944-404e-8ce2-4fee0b8c6e52" containerName="docker-build" containerID="cri-o://deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1" gracePeriod=30 Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.200952 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_69271230-9944-404e-8ce2-4fee0b8c6e52/docker-build/0.log" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.202158 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316260 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-system-configs\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316347 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-run\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316393 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-buildcachedir\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316448 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npn87\" (UniqueName: \"kubernetes.io/projected/69271230-9944-404e-8ce2-4fee0b8c6e52-kube-api-access-npn87\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316465 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-pull\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316498 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-push\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316528 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-root\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316566 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-proxy-ca-bundles\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316601 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-build-blob-cache\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316626 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-buildworkdir\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316631 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316652 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-node-pullsecrets\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316689 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.316753 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-ca-bundles\") pod \"69271230-9944-404e-8ce2-4fee0b8c6e52\" (UID: \"69271230-9944-404e-8ce2-4fee0b8c6e52\") " Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.317867 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.317884 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.317920 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.318127 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.318235 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.318249 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.318259 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69271230-9944-404e-8ce2-4fee0b8c6e52-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.318268 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.318299 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69271230-9944-404e-8ce2-4fee0b8c6e52-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.319320 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.322593 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.323245 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.324696 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69271230-9944-404e-8ce2-4fee0b8c6e52-kube-api-access-npn87" (OuterVolumeSpecName: "kube-api-access-npn87") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "kube-api-access-npn87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.355076 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_69271230-9944-404e-8ce2-4fee0b8c6e52/docker-build/0.log" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.355670 4743 generic.go:334] "Generic (PLEG): container finished" podID="69271230-9944-404e-8ce2-4fee0b8c6e52" containerID="deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1" exitCode=1 Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.355723 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"69271230-9944-404e-8ce2-4fee0b8c6e52","Type":"ContainerDied","Data":"deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1"} Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.355759 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"69271230-9944-404e-8ce2-4fee0b8c6e52","Type":"ContainerDied","Data":"1cca0faf6259832df2af6184fe1c200003e1196cb36694063c09234c8eca70d5"} Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.355778 4743 scope.go:117] "RemoveContainer" containerID="deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.355956 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.392601 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Nov 23 00:25:40 crc kubenswrapper[4743]: E1123 00:25:40.392874 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69271230-9944-404e-8ce2-4fee0b8c6e52" containerName="docker-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.392886 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="69271230-9944-404e-8ce2-4fee0b8c6e52" containerName="docker-build" Nov 23 00:25:40 crc kubenswrapper[4743]: E1123 00:25:40.392926 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69271230-9944-404e-8ce2-4fee0b8c6e52" containerName="manage-dockerfile" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.392935 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="69271230-9944-404e-8ce2-4fee0b8c6e52" containerName="manage-dockerfile" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.393066 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="69271230-9944-404e-8ce2-4fee0b8c6e52" containerName="docker-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.394161 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.398932 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.399018 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.399141 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.412611 4743 scope.go:117] "RemoveContainer" containerID="46d0c2e8ded75dae14156682c7b9c922f3541ac3969c92e2dc280bbdf623f18d" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.414408 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.419325 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npn87\" (UniqueName: \"kubernetes.io/projected/69271230-9944-404e-8ce2-4fee0b8c6e52-kube-api-access-npn87\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.419499 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.419594 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/69271230-9944-404e-8ce2-4fee0b8c6e52-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.419682 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.419765 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.437762 4743 scope.go:117] "RemoveContainer" containerID="deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1" Nov 23 00:25:40 crc kubenswrapper[4743]: E1123 00:25:40.442678 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1\": container with ID starting with deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1 not found: ID does not exist" containerID="deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.442726 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1"} err="failed to get container status \"deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1\": rpc error: code = NotFound desc = could not find container \"deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1\": container with ID starting with deca46f030c851daef322784e8c56e7372db1812493e294036535cca77cf79f1 not found: 
ID does not exist" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.442752 4743 scope.go:117] "RemoveContainer" containerID="46d0c2e8ded75dae14156682c7b9c922f3541ac3969c92e2dc280bbdf623f18d" Nov 23 00:25:40 crc kubenswrapper[4743]: E1123 00:25:40.443107 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46d0c2e8ded75dae14156682c7b9c922f3541ac3969c92e2dc280bbdf623f18d\": container with ID starting with 46d0c2e8ded75dae14156682c7b9c922f3541ac3969c92e2dc280bbdf623f18d not found: ID does not exist" containerID="46d0c2e8ded75dae14156682c7b9c922f3541ac3969c92e2dc280bbdf623f18d" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.443128 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46d0c2e8ded75dae14156682c7b9c922f3541ac3969c92e2dc280bbdf623f18d"} err="failed to get container status \"46d0c2e8ded75dae14156682c7b9c922f3541ac3969c92e2dc280bbdf623f18d\": rpc error: code = NotFound desc = could not find container \"46d0c2e8ded75dae14156682c7b9c922f3541ac3969c92e2dc280bbdf623f18d\": container with ID starting with 46d0c2e8ded75dae14156682c7b9c922f3541ac3969c92e2dc280bbdf623f18d not found: ID does not exist" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.453879 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.461253 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "69271230-9944-404e-8ce2-4fee0b8c6e52" (UID: "69271230-9944-404e-8ce2-4fee0b8c6e52"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520409 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-push\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520452 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-pull\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520496 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520519 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-run\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520543 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520562 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildworkdir\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520582 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520601 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-root\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520617 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildcachedir\") pod 
\"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520816 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-system-configs\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520883 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zzzc\" (UniqueName: \"kubernetes.io/projected/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-kube-api-access-6zzzc\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.520918 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.521006 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.521024 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69271230-9944-404e-8ce2-4fee0b8c6e52-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622158 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622204 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-run\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622230 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622247 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildworkdir\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622273 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622296 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-root\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622312 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildcachedir\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622345 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-system-configs\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622366 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zzzc\" (UniqueName: \"kubernetes.io/projected/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-kube-api-access-6zzzc\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622388 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622411 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-push\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622430 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-pull\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.622854 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.623039 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-root\") pod 
\"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.623047 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildcachedir\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.623062 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildworkdir\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.623267 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-run\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.623407 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.623452 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-system-configs\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.623993 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.624226 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.625833 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-pull\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.627980 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-push\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.650505 4743 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zzzc\" (UniqueName: \"kubernetes.io/projected/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-kube-api-access-6zzzc\") pod \"sg-core-2-build\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.688262 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.692694 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.713345 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.736773 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69271230-9944-404e-8ce2-4fee0b8c6e52" path="/var/lib/kubelet/pods/69271230-9944-404e-8ce2-4fee0b8c6e52/volumes" Nov 23 00:25:40 crc kubenswrapper[4743]: I1123 00:25:40.901605 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Nov 23 00:25:41 crc kubenswrapper[4743]: I1123 00:25:41.361559 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1","Type":"ContainerStarted","Data":"bfd6b07e3ca18584777d604d1f9406e18641769ff91f2aeca5e731cf7756c46c"} Nov 23 00:25:41 crc kubenswrapper[4743]: I1123 00:25:41.361630 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1","Type":"ContainerStarted","Data":"59e0cbb9348d01e074b2415c5c3c7860de3029c7506992df39f5a16bfd878432"} Nov 23 00:25:42 crc kubenswrapper[4743]: I1123 00:25:42.372125 4743 generic.go:334] "Generic (PLEG): container finished" podID="f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" containerID="bfd6b07e3ca18584777d604d1f9406e18641769ff91f2aeca5e731cf7756c46c" exitCode=0 Nov 23 00:25:42 crc kubenswrapper[4743]: I1123 00:25:42.372346 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1","Type":"ContainerDied","Data":"bfd6b07e3ca18584777d604d1f9406e18641769ff91f2aeca5e731cf7756c46c"} Nov 23 00:25:43 crc kubenswrapper[4743]: I1123 00:25:43.379886 4743 generic.go:334] "Generic (PLEG): container finished" podID="f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" containerID="504cdd47c0af0fb233ce611f070a14293ef2d0046a57b81e1abc254f5097ec63" exitCode=0 Nov 23 00:25:43 crc kubenswrapper[4743]: I1123 00:25:43.379978 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1","Type":"ContainerDied","Data":"504cdd47c0af0fb233ce611f070a14293ef2d0046a57b81e1abc254f5097ec63"} Nov 23 00:25:43 crc kubenswrapper[4743]: I1123 00:25:43.426157 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1/manage-dockerfile/0.log" Nov 23 00:25:44 crc kubenswrapper[4743]: I1123 00:25:44.389152 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1","Type":"ContainerStarted","Data":"63f4393c723107f510c2536067c32d23ae2ddbf688941e64e9b2638cebe128b2"} Nov 23 00:25:44 crc kubenswrapper[4743]: I1123 
00:25:44.414094 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=4.414077828 podStartE2EDuration="4.414077828s" podCreationTimestamp="2025-11-23 00:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:25:44.411416682 +0000 UTC m=+1136.489514819" watchObservedRunningTime="2025-11-23 00:25:44.414077828 +0000 UTC m=+1136.492175955" Nov 23 00:26:23 crc kubenswrapper[4743]: I1123 00:26:23.690818 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:26:23 crc kubenswrapper[4743]: I1123 00:26:23.691400 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:26:53 crc kubenswrapper[4743]: I1123 00:26:53.690636 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:26:53 crc kubenswrapper[4743]: I1123 00:26:53.691357 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:27:23 crc kubenswrapper[4743]: I1123 00:27:23.690231 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:27:23 crc kubenswrapper[4743]: I1123 00:27:23.691683 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:27:23 crc kubenswrapper[4743]: I1123 00:27:23.691765 4743 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:27:23 crc kubenswrapper[4743]: I1123 00:27:23.692477 4743 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a423b2693fed3abc2a8a9390dc3769db28cfe3fcfc6b65f5803f65e8c4773a0"} pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 00:27:23 crc kubenswrapper[4743]: I1123 00:27:23.692568 4743 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" containerID="cri-o://2a423b2693fed3abc2a8a9390dc3769db28cfe3fcfc6b65f5803f65e8c4773a0" gracePeriod=600 Nov 23 00:27:24 crc kubenswrapper[4743]: I1123 00:27:24.210555 4743 generic.go:334] "Generic (PLEG): container finished" podID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerID="2a423b2693fed3abc2a8a9390dc3769db28cfe3fcfc6b65f5803f65e8c4773a0" exitCode=0 Nov 23 00:27:24 crc kubenswrapper[4743]: I1123 00:27:24.210670 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerDied","Data":"2a423b2693fed3abc2a8a9390dc3769db28cfe3fcfc6b65f5803f65e8c4773a0"} Nov 23 00:27:24 crc kubenswrapper[4743]: I1123 00:27:24.210884 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"9dcd1bca2c6fe5058dfd781e9013fb345a66f90c8bac7c0726c518f47bebe38e"} Nov 23 00:27:24 crc kubenswrapper[4743]: I1123 00:27:24.210909 4743 scope.go:117] "RemoveContainer" containerID="016ef81e58c130b632f512df7f81288af04b6c5a10c9f9bb3144f24506f08545" Nov 23 00:28:47 crc kubenswrapper[4743]: I1123 00:28:47.933523 4743 generic.go:334] "Generic (PLEG): container finished" podID="f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" containerID="63f4393c723107f510c2536067c32d23ae2ddbf688941e64e9b2638cebe128b2" exitCode=0 Nov 23 00:28:47 crc kubenswrapper[4743]: I1123 00:28:47.933582 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1","Type":"ContainerDied","Data":"63f4393c723107f510c2536067c32d23ae2ddbf688941e64e9b2638cebe128b2"} Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.182537 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.254286 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildworkdir\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.254329 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-node-pullsecrets\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.254365 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-run\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.254387 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-blob-cache\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.254418 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-pull\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.254435 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zzzc\" (UniqueName: \"kubernetes.io/projected/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-kube-api-access-6zzzc\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.254456 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-push\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.254476 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-proxy-ca-bundles\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.254514 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-root\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.254461 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.254570 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-system-configs\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.255508 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-ca-bundles\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.255549 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildcachedir\") pod \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\" (UID: \"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1\") " Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.255380 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.255399 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.255818 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.255843 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.255855 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.255793 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.256091 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.256293 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.260269 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.267854 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-kube-api-access-6zzzc" (OuterVolumeSpecName: "kube-api-access-6zzzc") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "kube-api-access-6zzzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.267871 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.268918 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.356620 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.356651 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zzzc\" (UniqueName: \"kubernetes.io/projected/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-kube-api-access-6zzzc\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.356664 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.356678 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.356690 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.356703 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.356715 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.538299 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.559880 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.956141 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1","Type":"ContainerDied","Data":"59e0cbb9348d01e074b2415c5c3c7860de3029c7506992df39f5a16bfd878432"} Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.956189 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59e0cbb9348d01e074b2415c5c3c7860de3029c7506992df39f5a16bfd878432" Nov 23 00:28:49 crc kubenswrapper[4743]: I1123 00:28:49.956288 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.205053 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 23 00:28:54 crc kubenswrapper[4743]: E1123 00:28:54.205862 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" containerName="git-clone" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.205876 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" containerName="git-clone" Nov 23 00:28:54 crc kubenswrapper[4743]: E1123 00:28:54.205893 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" containerName="manage-dockerfile" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.205899 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" containerName="manage-dockerfile" Nov 23 00:28:54 crc kubenswrapper[4743]: E1123 00:28:54.205911 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" containerName="docker-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.205917 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" containerName="docker-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.206034 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" containerName="docker-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.206681 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.208693 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.209467 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.210766 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.227937 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.340055 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.340139 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.340333 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-run\") pod 
\"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.340502 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.340568 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-push\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.340597 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.340689 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4tz6\" (UniqueName: \"kubernetes.io/projected/e183b070-de7d-4036-9683-8dc4cda5120e-kube-api-access-w4tz6\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.340753 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.340798 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.340894 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-pull\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.340955 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.341100 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" 
(UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.441914 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.441979 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442012 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442028 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442050 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442071 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442092 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-push\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442110 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442105 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442128 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4tz6\" (UniqueName: \"kubernetes.io/projected/e183b070-de7d-4036-9683-8dc4cda5120e-kube-api-access-w4tz6\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442207 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442239 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442433 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442730 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442778 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442971 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.443027 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.443175 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " 
pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.443326 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.443403 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.442269 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-pull\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.446804 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-push\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.447278 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-pull\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.459812 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4tz6\" (UniqueName: \"kubernetes.io/projected/e183b070-de7d-4036-9683-8dc4cda5120e-kube-api-access-w4tz6\") pod \"sg-bridge-1-build\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.524927 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.705719 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 23 00:28:54 crc kubenswrapper[4743]: I1123 00:28:54.997835 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e183b070-de7d-4036-9683-8dc4cda5120e","Type":"ContainerStarted","Data":"1d477136c323d5678f5698f310182b355e722a347116881c2e82c8df646d954f"} Nov 23 00:28:56 crc kubenswrapper[4743]: I1123 00:28:56.632733 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1" (UID: "f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:28:56 crc kubenswrapper[4743]: I1123 00:28:56.675387 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f75d5b19-4c6e-49e0-9eb0-8bdc9c319bf1-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:28:57 crc kubenswrapper[4743]: I1123 00:28:57.013710 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e183b070-de7d-4036-9683-8dc4cda5120e","Type":"ContainerStarted","Data":"f288dcf4159974ef091dcfbeab4cec679da24ac7b07f42ed7fb88cea3fdb7072"} Nov 23 00:28:58 crc kubenswrapper[4743]: I1123 00:28:58.020618 4743 generic.go:334] "Generic (PLEG): container finished" podID="e183b070-de7d-4036-9683-8dc4cda5120e" containerID="f288dcf4159974ef091dcfbeab4cec679da24ac7b07f42ed7fb88cea3fdb7072" exitCode=0 Nov 23 00:28:58 crc kubenswrapper[4743]: I1123 00:28:58.020678 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e183b070-de7d-4036-9683-8dc4cda5120e","Type":"ContainerDied","Data":"f288dcf4159974ef091dcfbeab4cec679da24ac7b07f42ed7fb88cea3fdb7072"} Nov 23 00:28:59 crc kubenswrapper[4743]: I1123 00:28:59.031891 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e183b070-de7d-4036-9683-8dc4cda5120e","Type":"ContainerStarted","Data":"08933d38b4c31c834938429f8e6daa731f97f8a5c1df6341306e04d95fde05b8"} Nov 23 00:28:59 crc kubenswrapper[4743]: I1123 00:28:59.069239 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=5.069217639 podStartE2EDuration="5.069217639s" podCreationTimestamp="2025-11-23 00:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:28:59.066864021 +0000 UTC m=+1331.144962168" watchObservedRunningTime="2025-11-23 00:28:59.069217639 +0000 UTC m=+1331.147315766" Nov 23 00:29:04 crc kubenswrapper[4743]: I1123 00:29:04.649226 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 23 00:29:04 crc kubenswrapper[4743]: I1123 00:29:04.649753 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="e183b070-de7d-4036-9683-8dc4cda5120e" containerName="docker-build" containerID="cri-o://08933d38b4c31c834938429f8e6daa731f97f8a5c1df6341306e04d95fde05b8" gracePeriod=30 Nov 23 00:29:05 crc kubenswrapper[4743]: I1123 00:29:05.065474 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_e183b070-de7d-4036-9683-8dc4cda5120e/docker-build/0.log" Nov 23 00:29:05 crc kubenswrapper[4743]: I1123 00:29:05.066050 4743 generic.go:334] "Generic (PLEG): container finished" podID="e183b070-de7d-4036-9683-8dc4cda5120e" containerID="08933d38b4c31c834938429f8e6daa731f97f8a5c1df6341306e04d95fde05b8" exitCode=1 Nov 23 00:29:05 crc kubenswrapper[4743]: I1123 00:29:05.066087 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e183b070-de7d-4036-9683-8dc4cda5120e","Type":"ContainerDied","Data":"08933d38b4c31c834938429f8e6daa731f97f8a5c1df6341306e04d95fde05b8"} Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.131426 4743 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_sg-bridge-1-build_e183b070-de7d-4036-9683-8dc4cda5120e/docker-build/0.log" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.132041 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205073 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-proxy-ca-bundles\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205146 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-buildcachedir\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205174 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-root\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205203 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-node-pullsecrets\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205245 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205304 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-system-configs\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205369 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205391 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-run\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205419 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-build-blob-cache\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205451 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4tz6\" (UniqueName: \"kubernetes.io/projected/e183b070-de7d-4036-9683-8dc4cda5120e-kube-api-access-w4tz6\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205470 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-ca-bundles\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205515 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-pull\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205542 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-buildworkdir\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205569 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-push\") pod \"e183b070-de7d-4036-9683-8dc4cda5120e\" (UID: \"e183b070-de7d-4036-9683-8dc4cda5120e\") " Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205890 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.205908 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.206032 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.206046 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.206056 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e183b070-de7d-4036-9683-8dc4cda5120e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.206064 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.206135 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.206385 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.206497 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.214725 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.214745 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e183b070-de7d-4036-9683-8dc4cda5120e-kube-api-access-w4tz6" (OuterVolumeSpecName: "kube-api-access-w4tz6") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "kube-api-access-w4tz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.214826 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.307720 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.307756 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4tz6\" (UniqueName: \"kubernetes.io/projected/e183b070-de7d-4036-9683-8dc4cda5120e-kube-api-access-w4tz6\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.307769 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e183b070-de7d-4036-9683-8dc4cda5120e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.307781 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.307793 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.307805 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/e183b070-de7d-4036-9683-8dc4cda5120e-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.532999 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.611895 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.773555 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Nov 23 00:29:06 crc kubenswrapper[4743]: E1123 00:29:06.774101 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e183b070-de7d-4036-9683-8dc4cda5120e" containerName="manage-dockerfile" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.774186 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="e183b070-de7d-4036-9683-8dc4cda5120e" containerName="manage-dockerfile" Nov 23 00:29:06 crc kubenswrapper[4743]: E1123 00:29:06.774284 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e183b070-de7d-4036-9683-8dc4cda5120e" containerName="docker-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.774309 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="e183b070-de7d-4036-9683-8dc4cda5120e" containerName="docker-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.775393 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="e183b070-de7d-4036-9683-8dc4cda5120e" containerName="docker-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.783257 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.789102 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.789419 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.789581 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.796408 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.852213 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e183b070-de7d-4036-9683-8dc4cda5120e" (UID: "e183b070-de7d-4036-9683-8dc4cda5120e"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.921540 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.921612 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.921684 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.921717 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.921845 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.921883 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-pull\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.921913 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-push\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.921939 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.922078 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.922134 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8wh7\" (UniqueName: \"kubernetes.io/projected/9e752650-5b99-4eb3-b31e-5960c60831ea-kube-api-access-b8wh7\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.922247 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.922313 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:06 crc kubenswrapper[4743]: I1123 00:29:06.922451 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e183b070-de7d-4036-9683-8dc4cda5120e-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024220 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024285 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024336 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024354 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-pull\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024377 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-push\") pod \"sg-bridge-2-build\" (UID: 
\"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024396 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024416 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024421 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024435 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8wh7\" (UniqueName: \"kubernetes.io/projected/9e752650-5b99-4eb3-b31e-5960c60831ea-kube-api-access-b8wh7\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024624 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024679 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024729 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.024754 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.025317 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 
00:29:07.025439 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.025742 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.025794 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.026066 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.026197 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.026589 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.026626 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.030261 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-push\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.030815 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-pull\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.052339 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8wh7\" (UniqueName: 
\"kubernetes.io/projected/9e752650-5b99-4eb3-b31e-5960c60831ea-kube-api-access-b8wh7\") pod \"sg-bridge-2-build\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.080229 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_e183b070-de7d-4036-9683-8dc4cda5120e/docker-build/0.log" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.080929 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e183b070-de7d-4036-9683-8dc4cda5120e","Type":"ContainerDied","Data":"1d477136c323d5678f5698f310182b355e722a347116881c2e82c8df646d954f"} Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.081020 4743 scope.go:117] "RemoveContainer" containerID="08933d38b4c31c834938429f8e6daa731f97f8a5c1df6341306e04d95fde05b8" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.081085 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.111426 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.129867 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.136162 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.150982 4743 scope.go:117] "RemoveContainer" containerID="f288dcf4159974ef091dcfbeab4cec679da24ac7b07f42ed7fb88cea3fdb7072" Nov 23 00:29:07 crc kubenswrapper[4743]: I1123 00:29:07.556511 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Nov 23 00:29:07 crc kubenswrapper[4743]: W1123 00:29:07.567709 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e752650_5b99_4eb3_b31e_5960c60831ea.slice/crio-432ff13da1b4eff5907b8701977e23df46226e48033d653dcbf613f34eedbcba WatchSource:0}: Error finding container 432ff13da1b4eff5907b8701977e23df46226e48033d653dcbf613f34eedbcba: Status 404 returned error can't find the container with id 432ff13da1b4eff5907b8701977e23df46226e48033d653dcbf613f34eedbcba Nov 23 00:29:08 crc kubenswrapper[4743]: I1123 00:29:08.093273 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9e752650-5b99-4eb3-b31e-5960c60831ea","Type":"ContainerStarted","Data":"4b9512a3ab8f7b935df18f7a4cc84c875b70923e40c4ed44e11557d2c14f9c0e"} Nov 23 00:29:08 crc kubenswrapper[4743]: I1123 00:29:08.093315 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9e752650-5b99-4eb3-b31e-5960c60831ea","Type":"ContainerStarted","Data":"432ff13da1b4eff5907b8701977e23df46226e48033d653dcbf613f34eedbcba"} Nov 23 00:29:08 crc kubenswrapper[4743]: I1123 00:29:08.729982 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e183b070-de7d-4036-9683-8dc4cda5120e" path="/var/lib/kubelet/pods/e183b070-de7d-4036-9683-8dc4cda5120e/volumes" Nov 23 00:29:09 crc kubenswrapper[4743]: I1123 00:29:09.098977 4743 generic.go:334] "Generic (PLEG): container finished" podID="9e752650-5b99-4eb3-b31e-5960c60831ea" 
containerID="4b9512a3ab8f7b935df18f7a4cc84c875b70923e40c4ed44e11557d2c14f9c0e" exitCode=0 Nov 23 00:29:09 crc kubenswrapper[4743]: I1123 00:29:09.099017 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9e752650-5b99-4eb3-b31e-5960c60831ea","Type":"ContainerDied","Data":"4b9512a3ab8f7b935df18f7a4cc84c875b70923e40c4ed44e11557d2c14f9c0e"} Nov 23 00:29:10 crc kubenswrapper[4743]: I1123 00:29:10.106073 4743 generic.go:334] "Generic (PLEG): container finished" podID="9e752650-5b99-4eb3-b31e-5960c60831ea" containerID="2b3d8b77eefd3a1072e1fbc1563c80b1a7ba087b403ce0181774deadc0e7adcf" exitCode=0 Nov 23 00:29:10 crc kubenswrapper[4743]: I1123 00:29:10.106126 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9e752650-5b99-4eb3-b31e-5960c60831ea","Type":"ContainerDied","Data":"2b3d8b77eefd3a1072e1fbc1563c80b1a7ba087b403ce0181774deadc0e7adcf"} Nov 23 00:29:10 crc kubenswrapper[4743]: I1123 00:29:10.142924 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_9e752650-5b99-4eb3-b31e-5960c60831ea/manage-dockerfile/0.log" Nov 23 00:29:11 crc kubenswrapper[4743]: I1123 00:29:11.114692 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9e752650-5b99-4eb3-b31e-5960c60831ea","Type":"ContainerStarted","Data":"62c11b33fe360b8751cc7503efeedfc52352f5ee855a0428fc2999644527b597"} Nov 23 00:29:11 crc kubenswrapper[4743]: I1123 00:29:11.141110 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.141094215 podStartE2EDuration="5.141094215s" podCreationTimestamp="2025-11-23 00:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:29:11.137946788 +0000 UTC m=+1343.216044925" watchObservedRunningTime="2025-11-23 00:29:11.141094215 +0000 UTC m=+1343.219192342" Nov 23 00:29:23 crc kubenswrapper[4743]: I1123 00:29:23.690130 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:29:23 crc kubenswrapper[4743]: I1123 00:29:23.690685 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:29:53 crc kubenswrapper[4743]: I1123 00:29:53.690379 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:29:53 crc kubenswrapper[4743]: I1123 00:29:53.691451 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.136084 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t"] Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.137524 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.140246 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.140462 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.145856 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afc94e22-581e-470e-a34f-45ea5454a9ac-secret-volume\") pod \"collect-profiles-29397630-ptm7t\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.146330 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9xsf\" (UniqueName: \"kubernetes.io/projected/afc94e22-581e-470e-a34f-45ea5454a9ac-kube-api-access-g9xsf\") pod \"collect-profiles-29397630-ptm7t\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.146721 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc94e22-581e-470e-a34f-45ea5454a9ac-config-volume\") pod \"collect-profiles-29397630-ptm7t\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.150132 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t"] Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.248421 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc94e22-581e-470e-a34f-45ea5454a9ac-config-volume\") pod \"collect-profiles-29397630-ptm7t\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.248478 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afc94e22-581e-470e-a34f-45ea5454a9ac-secret-volume\") pod \"collect-profiles-29397630-ptm7t\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.248549 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9xsf\" (UniqueName: \"kubernetes.io/projected/afc94e22-581e-470e-a34f-45ea5454a9ac-kube-api-access-g9xsf\") pod \"collect-profiles-29397630-ptm7t\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.249300 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc94e22-581e-470e-a34f-45ea5454a9ac-config-volume\") pod \"collect-profiles-29397630-ptm7t\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.263340 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afc94e22-581e-470e-a34f-45ea5454a9ac-secret-volume\") pod \"collect-profiles-29397630-ptm7t\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.267237 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9xsf\" (UniqueName: \"kubernetes.io/projected/afc94e22-581e-470e-a34f-45ea5454a9ac-kube-api-access-g9xsf\") pod \"collect-profiles-29397630-ptm7t\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.457965 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:00 crc kubenswrapper[4743]: I1123 00:30:00.858059 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t"] Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.486655 4743 generic.go:334] "Generic (PLEG): container finished" podID="afc94e22-581e-470e-a34f-45ea5454a9ac" containerID="25c4914927cfc30c382f3ee5466006dc444e991593e585c4b84e5bbba98045be" exitCode=0 Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.486897 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" event={"ID":"afc94e22-581e-470e-a34f-45ea5454a9ac","Type":"ContainerDied","Data":"25c4914927cfc30c382f3ee5466006dc444e991593e585c4b84e5bbba98045be"} Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.486968 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" event={"ID":"afc94e22-581e-470e-a34f-45ea5454a9ac","Type":"ContainerStarted","Data":"3dce40349992497bdb9e056b7a63aaeda6e26d2d8851c7d682b48c022a4430c8"} Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.752813 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5p2v9"] Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.754176 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.770680 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5p2v9"] Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.866589 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-utilities\") pod \"redhat-operators-5p2v9\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.866642 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr8nc\" (UniqueName: \"kubernetes.io/projected/aa03f051-27b5-447a-b8d1-8c51d7e56857-kube-api-access-dr8nc\") pod \"redhat-operators-5p2v9\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.866732 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-catalog-content\") pod \"redhat-operators-5p2v9\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.968203 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-utilities\") pod \"redhat-operators-5p2v9\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.968256 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr8nc\" (UniqueName: \"kubernetes.io/projected/aa03f051-27b5-447a-b8d1-8c51d7e56857-kube-api-access-dr8nc\") pod \"redhat-operators-5p2v9\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.968290 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-catalog-content\") pod \"redhat-operators-5p2v9\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.968767 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-catalog-content\") pod \"redhat-operators-5p2v9\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.968777 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-utilities\") pod \"redhat-operators-5p2v9\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:01 crc kubenswrapper[4743]: I1123 00:30:01.986513 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dr8nc\" (UniqueName: \"kubernetes.io/projected/aa03f051-27b5-447a-b8d1-8c51d7e56857-kube-api-access-dr8nc\") pod \"redhat-operators-5p2v9\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.069039 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.287067 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5p2v9"] Nov 23 00:30:02 crc kubenswrapper[4743]: W1123 00:30:02.298395 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa03f051_27b5_447a_b8d1_8c51d7e56857.slice/crio-2be316cf787d2914bc1080571dd6c9e29ec7ad5092c3063a8a2cdcbadd9f3a9e WatchSource:0}: Error finding container 2be316cf787d2914bc1080571dd6c9e29ec7ad5092c3063a8a2cdcbadd9f3a9e: Status 404 returned error can't find the container with id 2be316cf787d2914bc1080571dd6c9e29ec7ad5092c3063a8a2cdcbadd9f3a9e Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.493836 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p2v9" event={"ID":"aa03f051-27b5-447a-b8d1-8c51d7e56857","Type":"ContainerStarted","Data":"2be316cf787d2914bc1080571dd6c9e29ec7ad5092c3063a8a2cdcbadd9f3a9e"} Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.787538 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.878527 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afc94e22-581e-470e-a34f-45ea5454a9ac-secret-volume\") pod \"afc94e22-581e-470e-a34f-45ea5454a9ac\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.878635 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc94e22-581e-470e-a34f-45ea5454a9ac-config-volume\") pod \"afc94e22-581e-470e-a34f-45ea5454a9ac\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.878691 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9xsf\" (UniqueName: \"kubernetes.io/projected/afc94e22-581e-470e-a34f-45ea5454a9ac-kube-api-access-g9xsf\") pod \"afc94e22-581e-470e-a34f-45ea5454a9ac\" (UID: \"afc94e22-581e-470e-a34f-45ea5454a9ac\") " Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.879648 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afc94e22-581e-470e-a34f-45ea5454a9ac-config-volume" (OuterVolumeSpecName: "config-volume") pod "afc94e22-581e-470e-a34f-45ea5454a9ac" (UID: "afc94e22-581e-470e-a34f-45ea5454a9ac"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.888562 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc94e22-581e-470e-a34f-45ea5454a9ac-kube-api-access-g9xsf" (OuterVolumeSpecName: "kube-api-access-g9xsf") pod "afc94e22-581e-470e-a34f-45ea5454a9ac" (UID: "afc94e22-581e-470e-a34f-45ea5454a9ac"). InnerVolumeSpecName "kube-api-access-g9xsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.888631 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afc94e22-581e-470e-a34f-45ea5454a9ac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "afc94e22-581e-470e-a34f-45ea5454a9ac" (UID: "afc94e22-581e-470e-a34f-45ea5454a9ac"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.980612 4743 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc94e22-581e-470e-a34f-45ea5454a9ac-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.980645 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9xsf\" (UniqueName: \"kubernetes.io/projected/afc94e22-581e-470e-a34f-45ea5454a9ac-kube-api-access-g9xsf\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:02 crc kubenswrapper[4743]: I1123 00:30:02.980656 4743 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afc94e22-581e-470e-a34f-45ea5454a9ac-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:03 crc kubenswrapper[4743]: I1123 00:30:03.500230 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" event={"ID":"afc94e22-581e-470e-a34f-45ea5454a9ac","Type":"ContainerDied","Data":"3dce40349992497bdb9e056b7a63aaeda6e26d2d8851c7d682b48c022a4430c8"} Nov 23 00:30:03 crc kubenswrapper[4743]: I1123 00:30:03.500273 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dce40349992497bdb9e056b7a63aaeda6e26d2d8851c7d682b48c022a4430c8" Nov 23 00:30:03 crc kubenswrapper[4743]: I1123 00:30:03.500284 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397630-ptm7t" Nov 23 00:30:03 crc kubenswrapper[4743]: I1123 00:30:03.502141 4743 generic.go:334] "Generic (PLEG): container finished" podID="9e752650-5b99-4eb3-b31e-5960c60831ea" containerID="62c11b33fe360b8751cc7503efeedfc52352f5ee855a0428fc2999644527b597" exitCode=0 Nov 23 00:30:03 crc kubenswrapper[4743]: I1123 00:30:03.502204 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9e752650-5b99-4eb3-b31e-5960c60831ea","Type":"ContainerDied","Data":"62c11b33fe360b8751cc7503efeedfc52352f5ee855a0428fc2999644527b597"} Nov 23 00:30:03 crc kubenswrapper[4743]: I1123 00:30:03.504299 4743 generic.go:334] "Generic (PLEG): container finished" podID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerID="6b25b68a8d405af6ad743e47c84b4d66bbc8234aeb47c461855271fe58a6b7f8" exitCode=0 Nov 23 00:30:03 crc kubenswrapper[4743]: I1123 00:30:03.504321 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p2v9" event={"ID":"aa03f051-27b5-447a-b8d1-8c51d7e56857","Type":"ContainerDied","Data":"6b25b68a8d405af6ad743e47c84b4d66bbc8234aeb47c461855271fe58a6b7f8"} Nov 23 00:30:03 crc kubenswrapper[4743]: I1123 00:30:03.506424 4743 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.513954 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p2v9" event={"ID":"aa03f051-27b5-447a-b8d1-8c51d7e56857","Type":"ContainerStarted","Data":"17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032"} Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.779101 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910647 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8wh7\" (UniqueName: \"kubernetes.io/projected/9e752650-5b99-4eb3-b31e-5960c60831ea-kube-api-access-b8wh7\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910742 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-root\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910773 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-buildcachedir\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910795 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-pull\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910816 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-run\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910834 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-push\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910852 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-buildworkdir\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910877 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-ca-bundles\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910875 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910911 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-build-blob-cache\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910951 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-node-pullsecrets\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.910970 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-system-configs\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.911011 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-proxy-ca-bundles\") pod \"9e752650-5b99-4eb3-b31e-5960c60831ea\" (UID: \"9e752650-5b99-4eb3-b31e-5960c60831ea\") " Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.911152 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.911219 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.911786 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.911993 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.912172 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.912342 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.915593 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.920292 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.921841 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:30:04 crc kubenswrapper[4743]: I1123 00:30:04.930313 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e752650-5b99-4eb3-b31e-5960c60831ea-kube-api-access-b8wh7" (OuterVolumeSpecName: "kube-api-access-b8wh7") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "kube-api-access-b8wh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.011910 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9e752650-5b99-4eb3-b31e-5960c60831ea-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.011938 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.011948 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.011958 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8wh7\" (UniqueName: \"kubernetes.io/projected/9e752650-5b99-4eb3-b31e-5960c60831ea-kube-api-access-b8wh7\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.011966 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.011974 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/9e752650-5b99-4eb3-b31e-5960c60831ea-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.011997 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.012007 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.012015 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e752650-5b99-4eb3-b31e-5960c60831ea-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.022249 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.113289 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.522323 4743 generic.go:334] "Generic (PLEG): container finished" podID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerID="17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032" exitCode=0 Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.522395 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p2v9" event={"ID":"aa03f051-27b5-447a-b8d1-8c51d7e56857","Type":"ContainerDied","Data":"17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032"} Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.525596 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"9e752650-5b99-4eb3-b31e-5960c60831ea","Type":"ContainerDied","Data":"432ff13da1b4eff5907b8701977e23df46226e48033d653dcbf613f34eedbcba"} Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.525629 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="432ff13da1b4eff5907b8701977e23df46226e48033d653dcbf613f34eedbcba" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.525698 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.595968 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9e752650-5b99-4eb3-b31e-5960c60831ea" (UID: "9e752650-5b99-4eb3-b31e-5960c60831ea"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:30:05 crc kubenswrapper[4743]: I1123 00:30:05.621208 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9e752650-5b99-4eb3-b31e-5960c60831ea-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:06 crc kubenswrapper[4743]: I1123 00:30:06.536391 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p2v9" event={"ID":"aa03f051-27b5-447a-b8d1-8c51d7e56857","Type":"ContainerStarted","Data":"138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18"} Nov 23 00:30:06 crc kubenswrapper[4743]: I1123 00:30:06.558981 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5p2v9" podStartSLOduration=2.76539804 podStartE2EDuration="5.55896093s" podCreationTimestamp="2025-11-23 00:30:01 +0000 UTC" firstStartedPulling="2025-11-23 00:30:03.506234277 +0000 UTC m=+1395.584332404" lastFinishedPulling="2025-11-23 00:30:06.299797167 +0000 UTC m=+1398.377895294" observedRunningTime="2025-11-23 00:30:06.555161057 +0000 UTC m=+1398.633259204" watchObservedRunningTime="2025-11-23 00:30:06.55896093 +0000 UTC m=+1398.637059067" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.707284 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 23 00:30:09 crc kubenswrapper[4743]: E1123 00:30:09.708349 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e752650-5b99-4eb3-b31e-5960c60831ea" containerName="manage-dockerfile" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.708368 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e752650-5b99-4eb3-b31e-5960c60831ea" containerName="manage-dockerfile" Nov 23 00:30:09 crc kubenswrapper[4743]: E1123 00:30:09.708398 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e752650-5b99-4eb3-b31e-5960c60831ea" containerName="git-clone" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.708407 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e752650-5b99-4eb3-b31e-5960c60831ea" containerName="git-clone" Nov 23 00:30:09 crc kubenswrapper[4743]: E1123 00:30:09.708419 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc94e22-581e-470e-a34f-45ea5454a9ac" containerName="collect-profiles" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.708430 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc94e22-581e-470e-a34f-45ea5454a9ac" containerName="collect-profiles" Nov 23 00:30:09 crc kubenswrapper[4743]: E1123 00:30:09.708443 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e752650-5b99-4eb3-b31e-5960c60831ea" containerName="docker-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.708451 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e752650-5b99-4eb3-b31e-5960c60831ea" containerName="docker-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.708626 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e752650-5b99-4eb3-b31e-5960c60831ea" containerName="docker-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.708640 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc94e22-581e-470e-a34f-45ea5454a9ac" containerName="collect-profiles" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.709603 4743 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.712701 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.712726 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8jg6l" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.712966 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.712980 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.728123 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776229 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776270 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776293 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776313 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776329 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776362 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776406 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776428 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776446 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776462 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngfbv\" (UniqueName: \"kubernetes.io/projected/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-kube-api-access-ngfbv\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776589 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.776612 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877310 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877377 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" 
Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877430 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877448 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877466 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877500 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877515 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877529 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877548 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877567 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877587 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-run\") pod 
\"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.877604 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngfbv\" (UniqueName: \"kubernetes.io/projected/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-kube-api-access-ngfbv\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.878780 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.879094 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.879177 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.879274 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.879406 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.879535 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.879566 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.879589 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.880748 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.884196 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.884564 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:09 crc kubenswrapper[4743]: I1123 00:30:09.898936 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngfbv\" (UniqueName: \"kubernetes.io/projected/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-kube-api-access-ngfbv\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:10 crc kubenswrapper[4743]: I1123 00:30:10.029216 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:10 crc kubenswrapper[4743]: I1123 00:30:10.458607 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 23 00:30:10 crc kubenswrapper[4743]: W1123 00:30:10.466713 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22fa99ee_77b3_4ca5_a8e4_c07691751ce7.slice/crio-f36c58a9d87a389a305482147a440160a3205851cdb1f652324d6e7a2451bb3c WatchSource:0}: Error finding container f36c58a9d87a389a305482147a440160a3205851cdb1f652324d6e7a2451bb3c: Status 404 returned error can't find the container with id f36c58a9d87a389a305482147a440160a3205851cdb1f652324d6e7a2451bb3c Nov 23 00:30:10 crc kubenswrapper[4743]: I1123 00:30:10.568109 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"22fa99ee-77b3-4ca5-a8e4-c07691751ce7","Type":"ContainerStarted","Data":"f36c58a9d87a389a305482147a440160a3205851cdb1f652324d6e7a2451bb3c"} Nov 23 00:30:11 crc kubenswrapper[4743]: I1123 00:30:11.577725 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"22fa99ee-77b3-4ca5-a8e4-c07691751ce7","Type":"ContainerStarted","Data":"6ccdafe5e30ce6641371999e7e0e35a12ab581d0944b8b1315c9ef764b5e3e3b"} Nov 23 00:30:12 crc kubenswrapper[4743]: I1123 00:30:12.070006 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:12 crc kubenswrapper[4743]: I1123 00:30:12.070059 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:12 crc kubenswrapper[4743]: I1123 00:30:12.584004 4743 generic.go:334] "Generic (PLEG): container finished" podID="22fa99ee-77b3-4ca5-a8e4-c07691751ce7" containerID="6ccdafe5e30ce6641371999e7e0e35a12ab581d0944b8b1315c9ef764b5e3e3b" exitCode=0 Nov 23 00:30:12 crc kubenswrapper[4743]: I1123 00:30:12.584076 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"22fa99ee-77b3-4ca5-a8e4-c07691751ce7","Type":"ContainerDied","Data":"6ccdafe5e30ce6641371999e7e0e35a12ab581d0944b8b1315c9ef764b5e3e3b"} Nov 23 00:30:13 crc kubenswrapper[4743]: I1123 00:30:13.112948 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5p2v9" podUID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerName="registry-server" probeResult="failure" output=< Nov 23 00:30:13 crc kubenswrapper[4743]: timeout: failed to connect service ":50051" within 1s Nov 23 00:30:13 crc kubenswrapper[4743]: > Nov 23 00:30:13 crc kubenswrapper[4743]: I1123 00:30:13.597039 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"22fa99ee-77b3-4ca5-a8e4-c07691751ce7","Type":"ContainerStarted","Data":"d2a1ad25ccbcd4626e7cfc7a0e002bad211e396ee67e2bce1dcb637ea0b12559"} Nov 23 00:30:19 crc kubenswrapper[4743]: I1123 00:30:19.845986 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=10.845971442 podStartE2EDuration="10.845971442s" podCreationTimestamp="2025-11-23 00:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-23 00:30:13.643711839 +0000 UTC m=+1405.721810006" watchObservedRunningTime="2025-11-23 00:30:19.845971442 +0000 UTC m=+1411.924069569" Nov 23 00:30:19 crc kubenswrapper[4743]: I1123 00:30:19.847255 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 23 00:30:19 crc kubenswrapper[4743]: I1123 00:30:19.847438 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="22fa99ee-77b3-4ca5-a8e4-c07691751ce7" containerName="docker-build" containerID="cri-o://d2a1ad25ccbcd4626e7cfc7a0e002bad211e396ee67e2bce1dcb637ea0b12559" gracePeriod=30 Nov 23 00:30:21 crc kubenswrapper[4743]: I1123 00:30:21.666222 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_22fa99ee-77b3-4ca5-a8e4-c07691751ce7/docker-build/0.log" Nov 23 00:30:21 crc kubenswrapper[4743]: I1123 00:30:21.667119 4743 generic.go:334] "Generic (PLEG): container finished" podID="22fa99ee-77b3-4ca5-a8e4-c07691751ce7" containerID="d2a1ad25ccbcd4626e7cfc7a0e002bad211e396ee67e2bce1dcb637ea0b12559" exitCode=1 Nov 23 00:30:21 crc kubenswrapper[4743]: I1123 00:30:21.667170 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"22fa99ee-77b3-4ca5-a8e4-c07691751ce7","Type":"ContainerDied","Data":"d2a1ad25ccbcd4626e7cfc7a0e002bad211e396ee67e2bce1dcb637ea0b12559"} Nov 23 00:30:21 crc kubenswrapper[4743]: I1123 00:30:21.950272 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Nov 23 00:30:21 crc kubenswrapper[4743]: I1123 00:30:21.958127 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Nov 23 00:30:21 crc kubenswrapper[4743]: I1123 00:30:21.958275 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:21 crc kubenswrapper[4743]: I1123 00:30:21.960442 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Nov 23 00:30:21 crc kubenswrapper[4743]: I1123 00:30:21.960866 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Nov 23 00:30:21 crc kubenswrapper[4743]: I1123 00:30:21.964311 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.069442 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.069626 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.069673 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.069696 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.069794 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.070536 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vh76\" (UniqueName: \"kubernetes.io/projected/7a2bc814-e475-462e-bdca-2dc94870a39d-kube-api-access-6vh76\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.070564 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-build-blob-cache\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.070603 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.070654 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.070749 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.070807 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.070877 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.113678 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172242 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172301 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172325 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172345 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172364 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172368 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172462 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172516 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172561 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172584 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172617 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc 
kubenswrapper[4743]: I1123 00:30:22.172639 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vh76\" (UniqueName: \"kubernetes.io/projected/7a2bc814-e475-462e-bdca-2dc94870a39d-kube-api-access-6vh76\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172661 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.172717 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.173085 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.173099 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.173302 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.173442 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.173480 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.173524 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.173619 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.177300 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.178932 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.179209 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.199611 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vh76\" (UniqueName: \"kubernetes.io/projected/7a2bc814-e475-462e-bdca-2dc94870a39d-kube-api-access-6vh76\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.276772 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.350604 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5p2v9"] Nov 23 00:30:22 crc kubenswrapper[4743]: I1123 00:30:22.721117 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Nov 23 00:30:22 crc kubenswrapper[4743]: W1123 00:30:22.728512 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a2bc814_e475_462e_bdca_2dc94870a39d.slice/crio-7698e380b8c16c7f98d0ff92219cfbe1796ff2db6c0746dd1d4f2cced7c3c3b0 WatchSource:0}: Error finding container 7698e380b8c16c7f98d0ff92219cfbe1796ff2db6c0746dd1d4f2cced7c3c3b0: Status 404 returned error can't find the container with id 7698e380b8c16c7f98d0ff92219cfbe1796ff2db6c0746dd1d4f2cced7c3c3b0 Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.236669 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_22fa99ee-77b3-4ca5-a8e4-c07691751ce7/docker-build/0.log" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.237430 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.388388 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildworkdir\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.388785 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-node-pullsecrets\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.388810 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-proxy-ca-bundles\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.388856 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-root\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.388877 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.388897 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildcachedir\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.388940 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.388999 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-blob-cache\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.389046 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-pull\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.389062 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.389101 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-push\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.389204 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-run\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.389250 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngfbv\" (UniqueName: \"kubernetes.io/projected/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-kube-api-access-ngfbv\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.389292 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-ca-bundles\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.389325 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-system-configs\") pod \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\" (UID: \"22fa99ee-77b3-4ca5-a8e4-c07691751ce7\") " Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.389979 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.390012 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.390030 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.390138 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.390277 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.390287 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.390318 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.394241 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-kube-api-access-ngfbv" (OuterVolumeSpecName: "kube-api-access-ngfbv") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). InnerVolumeSpecName "kube-api-access-ngfbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.394636 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.396590 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). 
InnerVolumeSpecName "builder-dockercfg-8jg6l-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.429597 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.491280 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.491308 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.491320 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.491329 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.491342 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.491350 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngfbv\" (UniqueName: \"kubernetes.io/projected/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-kube-api-access-ngfbv\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.491359 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.491367 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.680382 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_22fa99ee-77b3-4ca5-a8e4-c07691751ce7/docker-build/0.log" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.680913 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.680920 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"22fa99ee-77b3-4ca5-a8e4-c07691751ce7","Type":"ContainerDied","Data":"f36c58a9d87a389a305482147a440160a3205851cdb1f652324d6e7a2451bb3c"} Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.681059 4743 scope.go:117] "RemoveContainer" containerID="d2a1ad25ccbcd4626e7cfc7a0e002bad211e396ee67e2bce1dcb637ea0b12559" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.682520 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7a2bc814-e475-462e-bdca-2dc94870a39d","Type":"ContainerStarted","Data":"6d5e19b07d1f1bf2d0d6b78d78452828eea6c9ec8c9d88907573adf16c4f11ec"} Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.682572 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7a2bc814-e475-462e-bdca-2dc94870a39d","Type":"ContainerStarted","Data":"7698e380b8c16c7f98d0ff92219cfbe1796ff2db6c0746dd1d4f2cced7c3c3b0"} Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.682775 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5p2v9" podUID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerName="registry-server" containerID="cri-o://138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18" gracePeriod=2 Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.690035 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.690116 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.690198 4743 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.691824 4743 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9dcd1bca2c6fe5058dfd781e9013fb345a66f90c8bac7c0726c518f47bebe38e"} pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.691938 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" containerID="cri-o://9dcd1bca2c6fe5058dfd781e9013fb345a66f90c8bac7c0726c518f47bebe38e" gracePeriod=600 Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.713247 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "22fa99ee-77b3-4ca5-a8e4-c07691751ce7" (UID: "22fa99ee-77b3-4ca5-a8e4-c07691751ce7"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.717433 4743 scope.go:117] "RemoveContainer" containerID="6ccdafe5e30ce6641371999e7e0e35a12ab581d0944b8b1315c9ef764b5e3e3b" Nov 23 00:30:23 crc kubenswrapper[4743]: I1123 00:30:23.794798 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22fa99ee-77b3-4ca5-a8e4-c07691751ce7-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.108458 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.115198 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.617079 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.698229 4743 generic.go:334] "Generic (PLEG): container finished" podID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerID="9dcd1bca2c6fe5058dfd781e9013fb345a66f90c8bac7c0726c518f47bebe38e" exitCode=0 Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.698289 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerDied","Data":"9dcd1bca2c6fe5058dfd781e9013fb345a66f90c8bac7c0726c518f47bebe38e"} Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.698319 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15"} Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.698335 4743 scope.go:117] "RemoveContainer" containerID="2a423b2693fed3abc2a8a9390dc3769db28cfe3fcfc6b65f5803f65e8c4773a0" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.704643 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7a2bc814-e475-462e-bdca-2dc94870a39d","Type":"ContainerDied","Data":"6d5e19b07d1f1bf2d0d6b78d78452828eea6c9ec8c9d88907573adf16c4f11ec"} Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.704585 4743 generic.go:334] "Generic (PLEG): container finished" podID="7a2bc814-e475-462e-bdca-2dc94870a39d" containerID="6d5e19b07d1f1bf2d0d6b78d78452828eea6c9ec8c9d88907573adf16c4f11ec" exitCode=0 Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.709335 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5p2v9" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.709683 4743 generic.go:334] "Generic (PLEG): container finished" podID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerID="138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18" exitCode=0 Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.709749 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p2v9" event={"ID":"aa03f051-27b5-447a-b8d1-8c51d7e56857","Type":"ContainerDied","Data":"138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18"} Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.709789 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p2v9" event={"ID":"aa03f051-27b5-447a-b8d1-8c51d7e56857","Type":"ContainerDied","Data":"2be316cf787d2914bc1080571dd6c9e29ec7ad5092c3063a8a2cdcbadd9f3a9e"} Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.713960 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-catalog-content\") pod \"aa03f051-27b5-447a-b8d1-8c51d7e56857\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.714092 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-utilities\") pod \"aa03f051-27b5-447a-b8d1-8c51d7e56857\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.714185 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr8nc\" (UniqueName: \"kubernetes.io/projected/aa03f051-27b5-447a-b8d1-8c51d7e56857-kube-api-access-dr8nc\") pod \"aa03f051-27b5-447a-b8d1-8c51d7e56857\" (UID: \"aa03f051-27b5-447a-b8d1-8c51d7e56857\") " Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.724604 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-utilities" (OuterVolumeSpecName: "utilities") pod "aa03f051-27b5-447a-b8d1-8c51d7e56857" (UID: "aa03f051-27b5-447a-b8d1-8c51d7e56857"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.740585 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22fa99ee-77b3-4ca5-a8e4-c07691751ce7" path="/var/lib/kubelet/pods/22fa99ee-77b3-4ca5-a8e4-c07691751ce7/volumes" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.744320 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa03f051-27b5-447a-b8d1-8c51d7e56857-kube-api-access-dr8nc" (OuterVolumeSpecName: "kube-api-access-dr8nc") pod "aa03f051-27b5-447a-b8d1-8c51d7e56857" (UID: "aa03f051-27b5-447a-b8d1-8c51d7e56857"). InnerVolumeSpecName "kube-api-access-dr8nc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.748651 4743 scope.go:117] "RemoveContainer" containerID="138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.789682 4743 scope.go:117] "RemoveContainer" containerID="17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.826477 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.827543 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr8nc\" (UniqueName: \"kubernetes.io/projected/aa03f051-27b5-447a-b8d1-8c51d7e56857-kube-api-access-dr8nc\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.843904 4743 scope.go:117] "RemoveContainer" containerID="6b25b68a8d405af6ad743e47c84b4d66bbc8234aeb47c461855271fe58a6b7f8" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.864457 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa03f051-27b5-447a-b8d1-8c51d7e56857" (UID: "aa03f051-27b5-447a-b8d1-8c51d7e56857"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.883185 4743 scope.go:117] "RemoveContainer" containerID="138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18" Nov 23 00:30:24 crc kubenswrapper[4743]: E1123 00:30:24.883993 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18\": container with ID starting with 138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18 not found: ID does not exist" containerID="138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.884053 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18"} err="failed to get container status \"138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18\": rpc error: code = NotFound desc = could not find container \"138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18\": container with ID starting with 138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18 not found: ID does not exist" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.884091 4743 scope.go:117] "RemoveContainer" containerID="17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032" Nov 23 00:30:24 crc kubenswrapper[4743]: E1123 00:30:24.884713 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032\": container with ID starting with 17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032 not found: ID does not exist" containerID="17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.884748 4743 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032"} err="failed to get container status \"17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032\": rpc error: code = NotFound desc = could not find container \"17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032\": container with ID starting with 17d7e63d5be23344ac1a8d8e0cb52823250b53d65411bf4222becf34422fb032 not found: ID does not exist" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.884770 4743 scope.go:117] "RemoveContainer" containerID="6b25b68a8d405af6ad743e47c84b4d66bbc8234aeb47c461855271fe58a6b7f8" Nov 23 00:30:24 crc kubenswrapper[4743]: E1123 00:30:24.885429 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b25b68a8d405af6ad743e47c84b4d66bbc8234aeb47c461855271fe58a6b7f8\": container with ID starting with 6b25b68a8d405af6ad743e47c84b4d66bbc8234aeb47c461855271fe58a6b7f8 not found: ID does not exist" containerID="6b25b68a8d405af6ad743e47c84b4d66bbc8234aeb47c461855271fe58a6b7f8" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.885502 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b25b68a8d405af6ad743e47c84b4d66bbc8234aeb47c461855271fe58a6b7f8"} err="failed to get container status \"6b25b68a8d405af6ad743e47c84b4d66bbc8234aeb47c461855271fe58a6b7f8\": rpc error: code = NotFound desc = could not find container \"6b25b68a8d405af6ad743e47c84b4d66bbc8234aeb47c461855271fe58a6b7f8\": container with ID starting with 6b25b68a8d405af6ad743e47c84b4d66bbc8234aeb47c461855271fe58a6b7f8 not found: ID does not exist" Nov 23 00:30:24 crc kubenswrapper[4743]: I1123 00:30:24.928627 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa03f051-27b5-447a-b8d1-8c51d7e56857-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:30:25 crc kubenswrapper[4743]: I1123 00:30:25.054682 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5p2v9"] Nov 23 00:30:25 crc kubenswrapper[4743]: I1123 00:30:25.059472 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5p2v9"] Nov 23 00:30:25 crc kubenswrapper[4743]: I1123 00:30:25.717468 4743 generic.go:334] "Generic (PLEG): container finished" podID="7a2bc814-e475-462e-bdca-2dc94870a39d" containerID="a8345a24191e792248773d24837e7fc9609dfe1cbd90a69cdcf27613a547c2dd" exitCode=0 Nov 23 00:30:25 crc kubenswrapper[4743]: I1123 00:30:25.717584 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7a2bc814-e475-462e-bdca-2dc94870a39d","Type":"ContainerDied","Data":"a8345a24191e792248773d24837e7fc9609dfe1cbd90a69cdcf27613a547c2dd"} Nov 23 00:30:25 crc kubenswrapper[4743]: I1123 00:30:25.753430 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_7a2bc814-e475-462e-bdca-2dc94870a39d/manage-dockerfile/0.log" Nov 23 00:30:26 crc kubenswrapper[4743]: I1123 00:30:26.731706 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa03f051-27b5-447a-b8d1-8c51d7e56857" path="/var/lib/kubelet/pods/aa03f051-27b5-447a-b8d1-8c51d7e56857/volumes" Nov 23 00:30:26 crc kubenswrapper[4743]: I1123 00:30:26.734394 4743 kubelet.go:2453] "SyncLoop (PLEG): event for 
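[Editor's note] The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" lines above look alarming but are benign: the kubelet re-issues RemoveContainer for IDs it has already deleted, and CRI-O answers NotFound, which simply means the work is already done. A caller can treat NotFound from a delete path as success. The sketch below shows that check using the real grpc-go status and codes packages; removeContainer is a hypothetical stand-in that fabricates the same error, not a kubelet or CRI function.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI runtime call; here it always returns
// the NotFound error the runtime produced in the log above.
func removeContainer(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	id := "138416652eff57cae2d71114cddf88d541a96c9499e1d53cc07faec9c7befe18"
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			// Idempotent delete: the container is already gone, so this is success.
			fmt.Println("already removed:", id[:12])
			return
		}
		fmt.Println("real failure:", err)
	}
}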
pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7a2bc814-e475-462e-bdca-2dc94870a39d","Type":"ContainerStarted","Data":"c8b1e47919a9692e47a91a58520c391a7996305ae0859fc2bd0a4578d94cfac3"} Nov 23 00:30:26 crc kubenswrapper[4743]: I1123 00:30:26.763377 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.763346073 podStartE2EDuration="5.763346073s" podCreationTimestamp="2025-11-23 00:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:30:26.761756995 +0000 UTC m=+1418.839855152" watchObservedRunningTime="2025-11-23 00:30:26.763346073 +0000 UTC m=+1418.841444230" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.611956 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pr46m"] Nov 23 00:31:05 crc kubenswrapper[4743]: E1123 00:31:05.613067 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fa99ee-77b3-4ca5-a8e4-c07691751ce7" containerName="manage-dockerfile" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.613085 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fa99ee-77b3-4ca5-a8e4-c07691751ce7" containerName="manage-dockerfile" Nov 23 00:31:05 crc kubenswrapper[4743]: E1123 00:31:05.613109 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerName="registry-server" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.613116 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerName="registry-server" Nov 23 00:31:05 crc kubenswrapper[4743]: E1123 00:31:05.613125 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerName="extract-utilities" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.613132 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerName="extract-utilities" Nov 23 00:31:05 crc kubenswrapper[4743]: E1123 00:31:05.613139 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerName="extract-content" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.613145 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerName="extract-content" Nov 23 00:31:05 crc kubenswrapper[4743]: E1123 00:31:05.613154 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fa99ee-77b3-4ca5-a8e4-c07691751ce7" containerName="docker-build" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.613160 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fa99ee-77b3-4ca5-a8e4-c07691751ce7" containerName="docker-build" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.613299 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fa99ee-77b3-4ca5-a8e4-c07691751ce7" containerName="docker-build" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.613313 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa03f051-27b5-447a-b8d1-8c51d7e56857" containerName="registry-server" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.614311 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.627908 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pr46m"] Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.760589 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwflf\" (UniqueName: \"kubernetes.io/projected/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-kube-api-access-wwflf\") pod \"certified-operators-pr46m\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.760672 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-utilities\") pod \"certified-operators-pr46m\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.760865 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-catalog-content\") pod \"certified-operators-pr46m\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.862195 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwflf\" (UniqueName: \"kubernetes.io/projected/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-kube-api-access-wwflf\") pod \"certified-operators-pr46m\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.862280 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-utilities\") pod \"certified-operators-pr46m\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.862322 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-catalog-content\") pod \"certified-operators-pr46m\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.862818 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-catalog-content\") pod \"certified-operators-pr46m\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.863116 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-utilities\") pod \"certified-operators-pr46m\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.889571 4743 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wwflf\" (UniqueName: \"kubernetes.io/projected/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-kube-api-access-wwflf\") pod \"certified-operators-pr46m\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:05 crc kubenswrapper[4743]: I1123 00:31:05.981434 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:06 crc kubenswrapper[4743]: I1123 00:31:06.282912 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pr46m"] Nov 23 00:31:07 crc kubenswrapper[4743]: I1123 00:31:07.043017 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr46m" event={"ID":"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6","Type":"ContainerStarted","Data":"8e541e46e694d2308619cc81bb2e3058182da10610588327006d252e713e2662"} Nov 23 00:31:08 crc kubenswrapper[4743]: I1123 00:31:08.054991 4743 generic.go:334] "Generic (PLEG): container finished" podID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" containerID="923ce1dd20d1e87a4020be159f768725e9c14ec71e6d30c9bf3d485ba2516075" exitCode=0 Nov 23 00:31:08 crc kubenswrapper[4743]: I1123 00:31:08.055053 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr46m" event={"ID":"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6","Type":"ContainerDied","Data":"923ce1dd20d1e87a4020be159f768725e9c14ec71e6d30c9bf3d485ba2516075"} Nov 23 00:31:09 crc kubenswrapper[4743]: I1123 00:31:09.063045 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr46m" event={"ID":"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6","Type":"ContainerStarted","Data":"4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794"} Nov 23 00:31:10 crc kubenswrapper[4743]: I1123 00:31:10.071134 4743 generic.go:334] "Generic (PLEG): container finished" podID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" containerID="4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794" exitCode=0 Nov 23 00:31:10 crc kubenswrapper[4743]: I1123 00:31:10.071229 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr46m" event={"ID":"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6","Type":"ContainerDied","Data":"4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794"} Nov 23 00:31:11 crc kubenswrapper[4743]: I1123 00:31:11.079346 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr46m" event={"ID":"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6","Type":"ContainerStarted","Data":"13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6"} Nov 23 00:31:11 crc kubenswrapper[4743]: I1123 00:31:11.099564 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pr46m" podStartSLOduration=3.6663100379999998 podStartE2EDuration="6.09954153s" podCreationTimestamp="2025-11-23 00:31:05 +0000 UTC" firstStartedPulling="2025-11-23 00:31:08.057866637 +0000 UTC m=+1460.135964804" lastFinishedPulling="2025-11-23 00:31:10.491098159 +0000 UTC m=+1462.569196296" observedRunningTime="2025-11-23 00:31:11.098199147 +0000 UTC m=+1463.176297294" watchObservedRunningTime="2025-11-23 00:31:11.09954153 +0000 UTC m=+1463.177639657" Nov 23 00:31:15 crc kubenswrapper[4743]: I1123 00:31:15.981733 4743 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:15 crc kubenswrapper[4743]: I1123 00:31:15.982249 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:16 crc kubenswrapper[4743]: I1123 00:31:16.023841 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:16 crc kubenswrapper[4743]: I1123 00:31:16.171945 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:16 crc kubenswrapper[4743]: I1123 00:31:16.267740 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pr46m"] Nov 23 00:31:18 crc kubenswrapper[4743]: I1123 00:31:18.136642 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pr46m" podUID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" containerName="registry-server" containerID="cri-o://13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6" gracePeriod=2 Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.069580 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.145293 4743 generic.go:334] "Generic (PLEG): container finished" podID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" containerID="13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6" exitCode=0 Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.145338 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr46m" event={"ID":"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6","Type":"ContainerDied","Data":"13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6"} Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.145366 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr46m" event={"ID":"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6","Type":"ContainerDied","Data":"8e541e46e694d2308619cc81bb2e3058182da10610588327006d252e713e2662"} Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.145384 4743 scope.go:117] "RemoveContainer" containerID="13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.145442 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pr46m" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.160915 4743 scope.go:117] "RemoveContainer" containerID="4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.176235 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwflf\" (UniqueName: \"kubernetes.io/projected/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-kube-api-access-wwflf\") pod \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.176364 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-utilities\") pod \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.176547 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-catalog-content\") pod \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\" (UID: \"082fd9fd-8bdf-4b82-be65-8f83d96e2cc6\") " Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.176931 4743 scope.go:117] "RemoveContainer" containerID="923ce1dd20d1e87a4020be159f768725e9c14ec71e6d30c9bf3d485ba2516075" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.177203 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-utilities" (OuterVolumeSpecName: "utilities") pod "082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" (UID: "082fd9fd-8bdf-4b82-be65-8f83d96e2cc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.181810 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-kube-api-access-wwflf" (OuterVolumeSpecName: "kube-api-access-wwflf") pod "082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" (UID: "082fd9fd-8bdf-4b82-be65-8f83d96e2cc6"). InnerVolumeSpecName "kube-api-access-wwflf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.215560 4743 scope.go:117] "RemoveContainer" containerID="13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6" Nov 23 00:31:19 crc kubenswrapper[4743]: E1123 00:31:19.216196 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6\": container with ID starting with 13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6 not found: ID does not exist" containerID="13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.216237 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6"} err="failed to get container status \"13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6\": rpc error: code = NotFound desc = could not find container \"13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6\": container with ID starting with 13a5b6726e5313b8c1aa3313cb4c16f004b562681346ef85b7e5187f28bfc4b6 not found: ID does not exist" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.216262 4743 scope.go:117] "RemoveContainer" containerID="4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794" Nov 23 00:31:19 crc kubenswrapper[4743]: E1123 00:31:19.216695 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794\": container with ID starting with 4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794 not found: ID does not exist" containerID="4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.216762 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794"} err="failed to get container status \"4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794\": rpc error: code = NotFound desc = could not find container \"4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794\": container with ID starting with 4fced67f77f3a4fb0a8862bca5f3b28d245db57a2b808c9bfecde3dffdaf5794 not found: ID does not exist" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.216806 4743 scope.go:117] "RemoveContainer" containerID="923ce1dd20d1e87a4020be159f768725e9c14ec71e6d30c9bf3d485ba2516075" Nov 23 00:31:19 crc kubenswrapper[4743]: E1123 00:31:19.217141 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"923ce1dd20d1e87a4020be159f768725e9c14ec71e6d30c9bf3d485ba2516075\": container with ID starting with 923ce1dd20d1e87a4020be159f768725e9c14ec71e6d30c9bf3d485ba2516075 not found: ID does not exist" containerID="923ce1dd20d1e87a4020be159f768725e9c14ec71e6d30c9bf3d485ba2516075" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.217176 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"923ce1dd20d1e87a4020be159f768725e9c14ec71e6d30c9bf3d485ba2516075"} err="failed to get container status \"923ce1dd20d1e87a4020be159f768725e9c14ec71e6d30c9bf3d485ba2516075\": rpc error: code = NotFound desc = could not 
find container \"923ce1dd20d1e87a4020be159f768725e9c14ec71e6d30c9bf3d485ba2516075\": container with ID starting with 923ce1dd20d1e87a4020be159f768725e9c14ec71e6d30c9bf3d485ba2516075 not found: ID does not exist" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.241047 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" (UID: "082fd9fd-8bdf-4b82-be65-8f83d96e2cc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.278652 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.278688 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwflf\" (UniqueName: \"kubernetes.io/projected/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-kube-api-access-wwflf\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.278699 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.478752 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pr46m"] Nov 23 00:31:19 crc kubenswrapper[4743]: I1123 00:31:19.485261 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pr46m"] Nov 23 00:31:20 crc kubenswrapper[4743]: I1123 00:31:20.732763 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" path="/var/lib/kubelet/pods/082fd9fd-8bdf-4b82-be65-8f83d96e2cc6/volumes" Nov 23 00:31:23 crc kubenswrapper[4743]: I1123 00:31:23.180018 4743 generic.go:334] "Generic (PLEG): container finished" podID="7a2bc814-e475-462e-bdca-2dc94870a39d" containerID="c8b1e47919a9692e47a91a58520c391a7996305ae0859fc2bd0a4578d94cfac3" exitCode=0 Nov 23 00:31:23 crc kubenswrapper[4743]: I1123 00:31:23.180219 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7a2bc814-e475-462e-bdca-2dc94870a39d","Type":"ContainerDied","Data":"c8b1e47919a9692e47a91a58520c391a7996305ae0859fc2bd0a4578d94cfac3"} Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.483939 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.558879 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-build-blob-cache\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.558966 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-run\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559011 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-pull\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559054 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-push\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559090 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-ca-bundles\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559124 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-buildcachedir\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559146 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-root\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559177 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-proxy-ca-bundles\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559206 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-buildworkdir\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559247 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559286 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-node-pullsecrets\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559365 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-system-configs\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559393 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vh76\" (UniqueName: \"kubernetes.io/projected/7a2bc814-e475-462e-bdca-2dc94870a39d-kube-api-access-6vh76\") pod \"7a2bc814-e475-462e-bdca-2dc94870a39d\" (UID: \"7a2bc814-e475-462e-bdca-2dc94870a39d\") " Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559683 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.559724 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.560365 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.560388 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.560402 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.560815 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.562883 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.566141 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.566379 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.566440 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2bc814-e475-462e-bdca-2dc94870a39d-kube-api-access-6vh76" (OuterVolumeSpecName: "kube-api-access-6vh76") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "kube-api-access-6vh76". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.648755 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.661111 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.661140 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vh76\" (UniqueName: \"kubernetes.io/projected/7a2bc814-e475-462e-bdca-2dc94870a39d-kube-api-access-6vh76\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.661151 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.661164 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.661175 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.661187 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/7a2bc814-e475-462e-bdca-2dc94870a39d-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.661197 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.661207 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a2bc814-e475-462e-bdca-2dc94870a39d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.661217 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:24 crc kubenswrapper[4743]: I1123 00:31:24.661227 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a2bc814-e475-462e-bdca-2dc94870a39d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:25 crc kubenswrapper[4743]: I1123 00:31:25.197809 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"7a2bc814-e475-462e-bdca-2dc94870a39d","Type":"ContainerDied","Data":"7698e380b8c16c7f98d0ff92219cfbe1796ff2db6c0746dd1d4f2cced7c3c3b0"} Nov 23 00:31:25 crc kubenswrapper[4743]: I1123 00:31:25.197857 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7698e380b8c16c7f98d0ff92219cfbe1796ff2db6c0746dd1d4f2cced7c3c3b0" Nov 23 00:31:25 crc kubenswrapper[4743]: I1123 00:31:25.198055 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 23 00:31:25 crc kubenswrapper[4743]: I1123 00:31:25.402095 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7a2bc814-e475-462e-bdca-2dc94870a39d" (UID: "7a2bc814-e475-462e-bdca-2dc94870a39d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:25 crc kubenswrapper[4743]: I1123 00:31:25.473741 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7a2bc814-e475-462e-bdca-2dc94870a39d-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.751948 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Nov 23 00:31:34 crc kubenswrapper[4743]: E1123 00:31:34.752790 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2bc814-e475-462e-bdca-2dc94870a39d" containerName="docker-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.752806 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2bc814-e475-462e-bdca-2dc94870a39d" containerName="docker-build" Nov 23 00:31:34 crc kubenswrapper[4743]: E1123 00:31:34.752816 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" containerName="extract-utilities" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.752823 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" containerName="extract-utilities" Nov 23 00:31:34 crc kubenswrapper[4743]: E1123 00:31:34.752829 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" containerName="extract-content" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.752835 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" containerName="extract-content" Nov 23 00:31:34 crc kubenswrapper[4743]: E1123 00:31:34.752845 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2bc814-e475-462e-bdca-2dc94870a39d" containerName="manage-dockerfile" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.752874 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2bc814-e475-462e-bdca-2dc94870a39d" containerName="manage-dockerfile" Nov 23 00:31:34 crc kubenswrapper[4743]: E1123 00:31:34.752885 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" containerName="registry-server" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.752890 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" containerName="registry-server" Nov 23 00:31:34 crc kubenswrapper[4743]: E1123 00:31:34.752897 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2bc814-e475-462e-bdca-2dc94870a39d" containerName="git-clone" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.752903 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2bc814-e475-462e-bdca-2dc94870a39d" containerName="git-clone" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.753037 4743 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="082fd9fd-8bdf-4b82-be65-8f83d96e2cc6" containerName="registry-server" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.753048 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2bc814-e475-462e-bdca-2dc94870a39d" containerName="docker-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.753672 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.756505 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-sys-config" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.756574 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8jg6l" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.756996 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-global-ca" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.757915 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-ca" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.781740 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.819706 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxmrf\" (UniqueName: \"kubernetes.io/projected/4ece0e32-96ba-4956-907e-332529426c66-kube-api-access-zxmrf\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.819783 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.819922 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.820068 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.820174 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-buildworkdir\") pod 
\"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.820205 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.820232 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.820289 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.820315 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.820415 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.820460 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.820505 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.922893 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxmrf\" (UniqueName: 
\"kubernetes.io/projected/4ece0e32-96ba-4956-907e-332529426c66-kube-api-access-zxmrf\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.922977 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923021 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923081 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923136 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923168 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923178 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923201 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923279 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-buildcachedir\") pod 
\"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923328 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923370 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923408 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923457 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923476 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923572 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.923675 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.924166 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " 
pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.924357 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.924762 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.924825 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.925611 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.929613 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.929708 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:34 crc kubenswrapper[4743]: I1123 00:31:34.945928 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxmrf\" (UniqueName: \"kubernetes.io/projected/4ece0e32-96ba-4956-907e-332529426c66-kube-api-access-zxmrf\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:35 crc kubenswrapper[4743]: I1123 00:31:35.082172 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:35 crc kubenswrapper[4743]: I1123 00:31:35.320117 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Nov 23 00:31:36 crc kubenswrapper[4743]: I1123 00:31:36.287101 4743 generic.go:334] "Generic (PLEG): container finished" podID="4ece0e32-96ba-4956-907e-332529426c66" containerID="46e160ae9610b02a638920b9e52ac837ba8f39b5d365ca1343b2ba7a05de38fb" exitCode=0 Nov 23 00:31:36 crc kubenswrapper[4743]: I1123 00:31:36.287159 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"4ece0e32-96ba-4956-907e-332529426c66","Type":"ContainerDied","Data":"46e160ae9610b02a638920b9e52ac837ba8f39b5d365ca1343b2ba7a05de38fb"} Nov 23 00:31:36 crc kubenswrapper[4743]: I1123 00:31:36.287212 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"4ece0e32-96ba-4956-907e-332529426c66","Type":"ContainerStarted","Data":"1c2b101ddf4eb0a2a15b1008236ca26a480d5c2136cd905f7c9b533200bc114a"} Nov 23 00:31:37 crc kubenswrapper[4743]: I1123 00:31:37.298690 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_4ece0e32-96ba-4956-907e-332529426c66/docker-build/0.log" Nov 23 00:31:37 crc kubenswrapper[4743]: I1123 00:31:37.300311 4743 generic.go:334] "Generic (PLEG): container finished" podID="4ece0e32-96ba-4956-907e-332529426c66" containerID="eb91331239e0fa2f884d83de42a4ce883f2f148c4d742d447faad28d05dc54b9" exitCode=1 Nov 23 00:31:37 crc kubenswrapper[4743]: I1123 00:31:37.300384 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"4ece0e32-96ba-4956-907e-332529426c66","Type":"ContainerDied","Data":"eb91331239e0fa2f884d83de42a4ce883f2f148c4d742d447faad28d05dc54b9"} Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.641770 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_4ece0e32-96ba-4956-907e-332529426c66/docker-build/0.log" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.642908 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.683714 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-buildcachedir\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.683835 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.683870 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-root\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.683903 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-buildworkdir\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.683958 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-system-configs\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.684031 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-push\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.684114 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-pull\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.684212 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-node-pullsecrets\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.684290 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxmrf\" (UniqueName: \"kubernetes.io/projected/4ece0e32-96ba-4956-907e-332529426c66-kube-api-access-zxmrf\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.684357 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-ca-bundles\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.684379 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.684398 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-run\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.684463 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-build-blob-cache\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.684535 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-proxy-ca-bundles\") pod \"4ece0e32-96ba-4956-907e-332529426c66\" (UID: \"4ece0e32-96ba-4956-907e-332529426c66\") " Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.684959 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.685105 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.685250 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.685272 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.685286 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ece0e32-96ba-4956-907e-332529426c66-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.685873 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.686039 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.686111 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.686175 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.686595 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.690543 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.690551 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ece0e32-96ba-4956-907e-332529426c66-kube-api-access-zxmrf" (OuterVolumeSpecName: "kube-api-access-zxmrf") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "kube-api-access-zxmrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.691030 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "4ece0e32-96ba-4956-907e-332529426c66" (UID: "4ece0e32-96ba-4956-907e-332529426c66"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.786388 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.786421 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/4ece0e32-96ba-4956-907e-332529426c66-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.786437 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxmrf\" (UniqueName: \"kubernetes.io/projected/4ece0e32-96ba-4956-907e-332529426c66-kube-api-access-zxmrf\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.786449 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.786461 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.786470 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.786478 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.786530 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ece0e32-96ba-4956-907e-332529426c66-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:38 crc kubenswrapper[4743]: I1123 00:31:38.786545 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ece0e32-96ba-4956-907e-332529426c66-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:39 crc kubenswrapper[4743]: I1123 00:31:39.320446 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_4ece0e32-96ba-4956-907e-332529426c66/docker-build/0.log" Nov 23 00:31:39 crc kubenswrapper[4743]: I1123 00:31:39.321858 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"4ece0e32-96ba-4956-907e-332529426c66","Type":"ContainerDied","Data":"1c2b101ddf4eb0a2a15b1008236ca26a480d5c2136cd905f7c9b533200bc114a"} Nov 23 00:31:39 crc kubenswrapper[4743]: I1123 00:31:39.321927 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c2b101ddf4eb0a2a15b1008236ca26a480d5c2136cd905f7c9b533200bc114a" Nov 23 00:31:39 crc kubenswrapper[4743]: I1123 00:31:39.322107 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.190350 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bv448"] Nov 23 00:31:42 crc kubenswrapper[4743]: E1123 00:31:42.190677 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ece0e32-96ba-4956-907e-332529426c66" containerName="docker-build" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.190708 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ece0e32-96ba-4956-907e-332529426c66" containerName="docker-build" Nov 23 00:31:42 crc kubenswrapper[4743]: E1123 00:31:42.190724 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ece0e32-96ba-4956-907e-332529426c66" containerName="manage-dockerfile" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.190732 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ece0e32-96ba-4956-907e-332529426c66" containerName="manage-dockerfile" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.190875 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ece0e32-96ba-4956-907e-332529426c66" containerName="docker-build" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.191911 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.210550 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bv448"] Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.250883 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-utilities\") pod \"community-operators-bv448\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.251001 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rz6\" (UniqueName: \"kubernetes.io/projected/c06f7727-694e-4f12-9c0c-2aa0d50d4551-kube-api-access-x2rz6\") pod \"community-operators-bv448\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.251052 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-catalog-content\") pod \"community-operators-bv448\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.352564 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-utilities\") pod \"community-operators-bv448\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.352655 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2rz6\" (UniqueName: \"kubernetes.io/projected/c06f7727-694e-4f12-9c0c-2aa0d50d4551-kube-api-access-x2rz6\") pod 
\"community-operators-bv448\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.352689 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-catalog-content\") pod \"community-operators-bv448\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.353067 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-utilities\") pod \"community-operators-bv448\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.353188 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-catalog-content\") pod \"community-operators-bv448\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.380740 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2rz6\" (UniqueName: \"kubernetes.io/projected/c06f7727-694e-4f12-9c0c-2aa0d50d4551-kube-api-access-x2rz6\") pod \"community-operators-bv448\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.513210 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:42 crc kubenswrapper[4743]: I1123 00:31:42.818551 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bv448"] Nov 23 00:31:43 crc kubenswrapper[4743]: I1123 00:31:43.351503 4743 generic.go:334] "Generic (PLEG): container finished" podID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" containerID="c76d7a6f2b4bc330fb977a7f4d9f74c2771549bbf9cd07299157d5f68d39a71a" exitCode=0 Nov 23 00:31:43 crc kubenswrapper[4743]: I1123 00:31:43.351555 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv448" event={"ID":"c06f7727-694e-4f12-9c0c-2aa0d50d4551","Type":"ContainerDied","Data":"c76d7a6f2b4bc330fb977a7f4d9f74c2771549bbf9cd07299157d5f68d39a71a"} Nov 23 00:31:43 crc kubenswrapper[4743]: I1123 00:31:43.352449 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv448" event={"ID":"c06f7727-694e-4f12-9c0c-2aa0d50d4551","Type":"ContainerStarted","Data":"2fb780909fcafa4a75131bd2309771f7783a063cb55ed7016e6f5f4ebf539702"} Nov 23 00:31:44 crc kubenswrapper[4743]: I1123 00:31:44.361431 4743 generic.go:334] "Generic (PLEG): container finished" podID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" containerID="bd3b66b8a04193b5a6a66d35fbd4ade0385ab7c4e74753dea77a2e3b62a4693e" exitCode=0 Nov 23 00:31:44 crc kubenswrapper[4743]: I1123 00:31:44.361512 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv448" event={"ID":"c06f7727-694e-4f12-9c0c-2aa0d50d4551","Type":"ContainerDied","Data":"bd3b66b8a04193b5a6a66d35fbd4ade0385ab7c4e74753dea77a2e3b62a4693e"} Nov 23 00:31:45 crc kubenswrapper[4743]: I1123 00:31:45.244890 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Nov 23 00:31:45 crc kubenswrapper[4743]: I1123 00:31:45.249262 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Nov 23 00:31:45 crc kubenswrapper[4743]: I1123 00:31:45.369528 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv448" event={"ID":"c06f7727-694e-4f12-9c0c-2aa0d50d4551","Type":"ContainerStarted","Data":"02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f"} Nov 23 00:31:45 crc kubenswrapper[4743]: I1123 00:31:45.386504 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bv448" podStartSLOduration=1.9499329840000001 podStartE2EDuration="3.386477827s" podCreationTimestamp="2025-11-23 00:31:42 +0000 UTC" firstStartedPulling="2025-11-23 00:31:43.353028013 +0000 UTC m=+1495.431126140" lastFinishedPulling="2025-11-23 00:31:44.789572816 +0000 UTC m=+1496.867670983" observedRunningTime="2025-11-23 00:31:45.385945044 +0000 UTC m=+1497.464043251" watchObservedRunningTime="2025-11-23 00:31:45.386477827 +0000 UTC m=+1497.464575954" Nov 23 00:31:46 crc kubenswrapper[4743]: I1123 00:31:46.731996 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ece0e32-96ba-4956-907e-332529426c66" path="/var/lib/kubelet/pods/4ece0e32-96ba-4956-907e-332529426c66/volumes" Nov 23 00:31:46 crc kubenswrapper[4743]: I1123 00:31:46.878718 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Nov 23 00:31:46 crc kubenswrapper[4743]: I1123 
Nov 23 00:31:46 crc kubenswrapper[4743]: I1123 00:31:46.882052 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-global-ca"
Nov 23 00:31:46 crc kubenswrapper[4743]: I1123 00:31:46.882439 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-ca"
Nov 23 00:31:46 crc kubenswrapper[4743]: I1123 00:31:46.882937 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-sys-config"
Nov 23 00:31:46 crc kubenswrapper[4743]: I1123 00:31:46.883384 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8jg6l"
Nov 23 00:31:46 crc kubenswrapper[4743]: I1123 00:31:46.897441 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"]
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.013718 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.013799 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2xsv\" (UniqueName: \"kubernetes.io/projected/962c988e-6ed7-4619-953f-5b614bd77225-kube-api-access-z2xsv\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.013835 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.013864 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.013897 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.013945 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.013980 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.014001 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.014024 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.014065 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.014112 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.014167 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115307 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115351 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115387 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115417 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2xsv\" (UniqueName: \"kubernetes.io/projected/962c988e-6ed7-4619-953f-5b614bd77225-kube-api-access-z2xsv\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115433 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115446 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115471 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115524 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115577 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115612 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115633 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115654 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115711 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115803 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.115808 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.116028 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.116557 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.116717 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.117004 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.117121 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.117123 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.122422 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.122863 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.133236 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2xsv\" (UniqueName: \"kubernetes.io/projected/962c988e-6ed7-4619-953f-5b614bd77225-kube-api-access-z2xsv\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.204522 4743 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Nov 23 00:31:47 crc kubenswrapper[4743]: I1123 00:31:47.628921 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Nov 23 00:31:47 crc kubenswrapper[4743]: W1123 00:31:47.639888 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod962c988e_6ed7_4619_953f_5b614bd77225.slice/crio-83179685d8213acccdad861c9ea5ff6131c07f78508e0dfc2a79c24b59080a66 WatchSource:0}: Error finding container 83179685d8213acccdad861c9ea5ff6131c07f78508e0dfc2a79c24b59080a66: Status 404 returned error can't find the container with id 83179685d8213acccdad861c9ea5ff6131c07f78508e0dfc2a79c24b59080a66 Nov 23 00:31:48 crc kubenswrapper[4743]: I1123 00:31:48.390243 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"962c988e-6ed7-4619-953f-5b614bd77225","Type":"ContainerStarted","Data":"5fd54925f9a8483f6291c243a1caab1866c213501fe444f095fc88c180a89017"} Nov 23 00:31:48 crc kubenswrapper[4743]: I1123 00:31:48.390548 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"962c988e-6ed7-4619-953f-5b614bd77225","Type":"ContainerStarted","Data":"83179685d8213acccdad861c9ea5ff6131c07f78508e0dfc2a79c24b59080a66"} Nov 23 00:31:49 crc kubenswrapper[4743]: I1123 00:31:49.399715 4743 generic.go:334] "Generic (PLEG): container finished" podID="962c988e-6ed7-4619-953f-5b614bd77225" containerID="5fd54925f9a8483f6291c243a1caab1866c213501fe444f095fc88c180a89017" exitCode=0 Nov 23 00:31:49 crc kubenswrapper[4743]: I1123 00:31:49.399774 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"962c988e-6ed7-4619-953f-5b614bd77225","Type":"ContainerDied","Data":"5fd54925f9a8483f6291c243a1caab1866c213501fe444f095fc88c180a89017"} Nov 23 00:31:50 crc kubenswrapper[4743]: I1123 00:31:50.409024 4743 generic.go:334] "Generic (PLEG): container finished" podID="962c988e-6ed7-4619-953f-5b614bd77225" containerID="cf2cc6586aa24561612756d2c15a728a6cbe82fe5e29dd3c4a02bd817867351d" exitCode=0 Nov 23 00:31:50 crc kubenswrapper[4743]: I1123 00:31:50.409089 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"962c988e-6ed7-4619-953f-5b614bd77225","Type":"ContainerDied","Data":"cf2cc6586aa24561612756d2c15a728a6cbe82fe5e29dd3c4a02bd817867351d"} Nov 23 00:31:50 crc kubenswrapper[4743]: I1123 00:31:50.470815 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_962c988e-6ed7-4619-953f-5b614bd77225/manage-dockerfile/0.log" Nov 23 00:31:51 crc kubenswrapper[4743]: I1123 00:31:51.432238 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"962c988e-6ed7-4619-953f-5b614bd77225","Type":"ContainerStarted","Data":"7b1b7ce438ff522041e094ea8977f5639c2c76c95857a0d7c8cc4f942d6e22d4"} Nov 23 00:31:51 crc kubenswrapper[4743]: I1123 00:31:51.481066 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-bundle-2-build" podStartSLOduration=5.481038895 podStartE2EDuration="5.481038895s" podCreationTimestamp="2025-11-23 
00:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:31:51.465073287 +0000 UTC m=+1503.543171434" watchObservedRunningTime="2025-11-23 00:31:51.481038895 +0000 UTC m=+1503.559137022" Nov 23 00:31:52 crc kubenswrapper[4743]: I1123 00:31:52.514547 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:52 crc kubenswrapper[4743]: I1123 00:31:52.514643 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:52 crc kubenswrapper[4743]: I1123 00:31:52.572520 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:53 crc kubenswrapper[4743]: I1123 00:31:53.481529 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:53 crc kubenswrapper[4743]: I1123 00:31:53.527546 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bv448"] Nov 23 00:31:55 crc kubenswrapper[4743]: I1123 00:31:55.466091 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bv448" podUID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" containerName="registry-server" containerID="cri-o://02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f" gracePeriod=2 Nov 23 00:31:55 crc kubenswrapper[4743]: I1123 00:31:55.892661 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.055115 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-utilities\") pod \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.055205 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-catalog-content\") pod \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.055291 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2rz6\" (UniqueName: \"kubernetes.io/projected/c06f7727-694e-4f12-9c0c-2aa0d50d4551-kube-api-access-x2rz6\") pod \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\" (UID: \"c06f7727-694e-4f12-9c0c-2aa0d50d4551\") " Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.056818 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-utilities" (OuterVolumeSpecName: "utilities") pod "c06f7727-694e-4f12-9c0c-2aa0d50d4551" (UID: "c06f7727-694e-4f12-9c0c-2aa0d50d4551"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.063553 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06f7727-694e-4f12-9c0c-2aa0d50d4551-kube-api-access-x2rz6" (OuterVolumeSpecName: "kube-api-access-x2rz6") pod "c06f7727-694e-4f12-9c0c-2aa0d50d4551" (UID: "c06f7727-694e-4f12-9c0c-2aa0d50d4551"). InnerVolumeSpecName "kube-api-access-x2rz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.115145 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c06f7727-694e-4f12-9c0c-2aa0d50d4551" (UID: "c06f7727-694e-4f12-9c0c-2aa0d50d4551"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.157468 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.157525 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c06f7727-694e-4f12-9c0c-2aa0d50d4551-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.157537 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2rz6\" (UniqueName: \"kubernetes.io/projected/c06f7727-694e-4f12-9c0c-2aa0d50d4551-kube-api-access-x2rz6\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.474063 4743 generic.go:334] "Generic (PLEG): container finished" podID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" containerID="02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f" exitCode=0 Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.474187 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv448" event={"ID":"c06f7727-694e-4f12-9c0c-2aa0d50d4551","Type":"ContainerDied","Data":"02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f"} Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.474235 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv448" event={"ID":"c06f7727-694e-4f12-9c0c-2aa0d50d4551","Type":"ContainerDied","Data":"2fb780909fcafa4a75131bd2309771f7783a063cb55ed7016e6f5f4ebf539702"} Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.474245 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bv448" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.474256 4743 scope.go:117] "RemoveContainer" containerID="02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.477437 4743 generic.go:334] "Generic (PLEG): container finished" podID="962c988e-6ed7-4619-953f-5b614bd77225" containerID="7b1b7ce438ff522041e094ea8977f5639c2c76c95857a0d7c8cc4f942d6e22d4" exitCode=0 Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.477471 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"962c988e-6ed7-4619-953f-5b614bd77225","Type":"ContainerDied","Data":"7b1b7ce438ff522041e094ea8977f5639c2c76c95857a0d7c8cc4f942d6e22d4"} Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.500246 4743 scope.go:117] "RemoveContainer" containerID="bd3b66b8a04193b5a6a66d35fbd4ade0385ab7c4e74753dea77a2e3b62a4693e" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.523913 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bv448"] Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.524209 4743 scope.go:117] "RemoveContainer" containerID="c76d7a6f2b4bc330fb977a7f4d9f74c2771549bbf9cd07299157d5f68d39a71a" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.526285 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bv448"] Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.544338 4743 scope.go:117] "RemoveContainer" containerID="02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f" Nov 23 00:31:56 crc kubenswrapper[4743]: E1123 00:31:56.544978 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f\": container with ID starting with 02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f not found: ID does not exist" containerID="02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.545028 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f"} err="failed to get container status \"02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f\": rpc error: code = NotFound desc = could not find container \"02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f\": container with ID starting with 02844742e71be9bb06f130fa22ff0d0677bfae61f04d96f53c702c13303b009f not found: ID does not exist" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.545061 4743 scope.go:117] "RemoveContainer" containerID="bd3b66b8a04193b5a6a66d35fbd4ade0385ab7c4e74753dea77a2e3b62a4693e" Nov 23 00:31:56 crc kubenswrapper[4743]: E1123 00:31:56.545543 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3b66b8a04193b5a6a66d35fbd4ade0385ab7c4e74753dea77a2e3b62a4693e\": container with ID starting with bd3b66b8a04193b5a6a66d35fbd4ade0385ab7c4e74753dea77a2e3b62a4693e not found: ID does not exist" containerID="bd3b66b8a04193b5a6a66d35fbd4ade0385ab7c4e74753dea77a2e3b62a4693e" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.545719 4743 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"bd3b66b8a04193b5a6a66d35fbd4ade0385ab7c4e74753dea77a2e3b62a4693e"} err="failed to get container status \"bd3b66b8a04193b5a6a66d35fbd4ade0385ab7c4e74753dea77a2e3b62a4693e\": rpc error: code = NotFound desc = could not find container \"bd3b66b8a04193b5a6a66d35fbd4ade0385ab7c4e74753dea77a2e3b62a4693e\": container with ID starting with bd3b66b8a04193b5a6a66d35fbd4ade0385ab7c4e74753dea77a2e3b62a4693e not found: ID does not exist" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.545793 4743 scope.go:117] "RemoveContainer" containerID="c76d7a6f2b4bc330fb977a7f4d9f74c2771549bbf9cd07299157d5f68d39a71a" Nov 23 00:31:56 crc kubenswrapper[4743]: E1123 00:31:56.546072 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c76d7a6f2b4bc330fb977a7f4d9f74c2771549bbf9cd07299157d5f68d39a71a\": container with ID starting with c76d7a6f2b4bc330fb977a7f4d9f74c2771549bbf9cd07299157d5f68d39a71a not found: ID does not exist" containerID="c76d7a6f2b4bc330fb977a7f4d9f74c2771549bbf9cd07299157d5f68d39a71a" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.546103 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c76d7a6f2b4bc330fb977a7f4d9f74c2771549bbf9cd07299157d5f68d39a71a"} err="failed to get container status \"c76d7a6f2b4bc330fb977a7f4d9f74c2771549bbf9cd07299157d5f68d39a71a\": rpc error: code = NotFound desc = could not find container \"c76d7a6f2b4bc330fb977a7f4d9f74c2771549bbf9cd07299157d5f68d39a71a\": container with ID starting with c76d7a6f2b4bc330fb977a7f4d9f74c2771549bbf9cd07299157d5f68d39a71a not found: ID does not exist" Nov 23 00:31:56 crc kubenswrapper[4743]: I1123 00:31:56.729790 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" path="/var/lib/kubelet/pods/c06f7727-694e-4f12-9c0c-2aa0d50d4551/volumes" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.809860 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.984595 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-run\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.985184 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-system-configs\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.985372 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-pull\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.985541 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.985576 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-push\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.986031 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-root\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.986096 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-proxy-ca-bundles\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.986130 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-node-pullsecrets\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.986151 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-ca-bundles\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 
00:31:57.986177 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-buildcachedir\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.986227 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-build-blob-cache\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.986261 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.986272 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-buildworkdir\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.986349 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.986396 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2xsv\" (UniqueName: \"kubernetes.io/projected/962c988e-6ed7-4619-953f-5b614bd77225-kube-api-access-z2xsv\") pod \"962c988e-6ed7-4619-953f-5b614bd77225\" (UID: \"962c988e-6ed7-4619-953f-5b614bd77225\") " Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.987214 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.987260 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/962c988e-6ed7-4619-953f-5b614bd77225-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.987283 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.987432 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.987783 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.988360 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.988574 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.991381 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.991476 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962c988e-6ed7-4619-953f-5b614bd77225-kube-api-access-z2xsv" (OuterVolumeSpecName: "kube-api-access-z2xsv") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "kube-api-access-z2xsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.992033 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:31:57 crc kubenswrapper[4743]: I1123 00:31:57.996175 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.001162 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "962c988e-6ed7-4619-953f-5b614bd77225" (UID: "962c988e-6ed7-4619-953f-5b614bd77225"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.088851 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.088916 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.088937 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2xsv\" (UniqueName: \"kubernetes.io/projected/962c988e-6ed7-4619-953f-5b614bd77225-kube-api-access-z2xsv\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.088961 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.088981 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.089001 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/962c988e-6ed7-4619-953f-5b614bd77225-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.089020 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/962c988e-6ed7-4619-953f-5b614bd77225-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.089039 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.089056 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962c988e-6ed7-4619-953f-5b614bd77225-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.498008 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"962c988e-6ed7-4619-953f-5b614bd77225","Type":"ContainerDied","Data":"83179685d8213acccdad861c9ea5ff6131c07f78508e0dfc2a79c24b59080a66"} Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.498353 4743 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="83179685d8213acccdad861c9ea5ff6131c07f78508e0dfc2a79c24b59080a66" Nov 23 00:31:58 crc kubenswrapper[4743]: I1123 00:31:58.498213 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.788583 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Nov 23 00:32:01 crc kubenswrapper[4743]: E1123 00:32:01.789515 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" containerName="extract-content" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.789541 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" containerName="extract-content" Nov 23 00:32:01 crc kubenswrapper[4743]: E1123 00:32:01.789559 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962c988e-6ed7-4619-953f-5b614bd77225" containerName="docker-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.789570 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="962c988e-6ed7-4619-953f-5b614bd77225" containerName="docker-build" Nov 23 00:32:01 crc kubenswrapper[4743]: E1123 00:32:01.789589 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" containerName="registry-server" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.789622 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" containerName="registry-server" Nov 23 00:32:01 crc kubenswrapper[4743]: E1123 00:32:01.789639 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" containerName="extract-utilities" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.789650 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" containerName="extract-utilities" Nov 23 00:32:01 crc kubenswrapper[4743]: E1123 00:32:01.789666 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962c988e-6ed7-4619-953f-5b614bd77225" containerName="manage-dockerfile" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.789676 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="962c988e-6ed7-4619-953f-5b614bd77225" containerName="manage-dockerfile" Nov 23 00:32:01 crc kubenswrapper[4743]: E1123 00:32:01.789690 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962c988e-6ed7-4619-953f-5b614bd77225" containerName="git-clone" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.789701 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="962c988e-6ed7-4619-953f-5b614bd77225" containerName="git-clone" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.789890 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06f7727-694e-4f12-9c0c-2aa0d50d4551" containerName="registry-server" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.789912 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="962c988e-6ed7-4619-953f-5b614bd77225" containerName="docker-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.791049 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.793152 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-sys-config" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.793418 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-ca" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.794297 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-global-ca" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.794726 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8jg6l" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.805112 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.945376 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.945431 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.945710 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.945794 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.945862 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.945937 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: 
\"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.946001 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55fcr\" (UniqueName: \"kubernetes.io/projected/321e5a72-9a42-4259-9f44-3c9708385b47-kube-api-access-55fcr\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.946156 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.946280 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.946324 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.946376 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:01 crc kubenswrapper[4743]: I1123 00:32:01.946470 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.047627 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.047713 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-pull\") pod 
\"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.047769 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.047799 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.047838 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.047855 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.047878 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.047898 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.047920 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55fcr\" (UniqueName: \"kubernetes.io/projected/321e5a72-9a42-4259-9f44-3c9708385b47-kube-api-access-55fcr\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.047949 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc 
kubenswrapper[4743]: I1123 00:32:02.047982 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.048003 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.048270 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.048644 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.048842 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.048994 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.049211 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.049331 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.049419 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.049581 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.049610 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.056321 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.061046 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.065284 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55fcr\" (UniqueName: \"kubernetes.io/projected/321e5a72-9a42-4259-9f44-3c9708385b47-kube-api-access-55fcr\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.108740 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.342377 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Nov 23 00:32:02 crc kubenswrapper[4743]: I1123 00:32:02.531248 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"321e5a72-9a42-4259-9f44-3c9708385b47","Type":"ContainerStarted","Data":"020725cce41c95583cb7d736118589349d11619d30eac58b4fa30ca881d82f42"} Nov 23 00:32:03 crc kubenswrapper[4743]: I1123 00:32:03.538792 4743 generic.go:334] "Generic (PLEG): container finished" podID="321e5a72-9a42-4259-9f44-3c9708385b47" containerID="08eb692bd0713878d017855b58e0d0ffe5df9cdcb6027c919f630352e5a1347f" exitCode=0 Nov 23 00:32:03 crc kubenswrapper[4743]: I1123 00:32:03.538898 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"321e5a72-9a42-4259-9f44-3c9708385b47","Type":"ContainerDied","Data":"08eb692bd0713878d017855b58e0d0ffe5df9cdcb6027c919f630352e5a1347f"} Nov 23 00:32:04 crc kubenswrapper[4743]: I1123 00:32:04.547474 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_321e5a72-9a42-4259-9f44-3c9708385b47/docker-build/0.log" Nov 23 00:32:04 crc kubenswrapper[4743]: I1123 00:32:04.547887 4743 generic.go:334] "Generic (PLEG): container finished" podID="321e5a72-9a42-4259-9f44-3c9708385b47" containerID="4720ea2a5a1ce0a3c9ab78fbaa6b3d28ba60df6e90561808edba19cf8c272758" exitCode=1 Nov 23 00:32:04 crc kubenswrapper[4743]: I1123 00:32:04.547917 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"321e5a72-9a42-4259-9f44-3c9708385b47","Type":"ContainerDied","Data":"4720ea2a5a1ce0a3c9ab78fbaa6b3d28ba60df6e90561808edba19cf8c272758"} Nov 23 00:32:05 crc kubenswrapper[4743]: I1123 00:32:05.854567 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_321e5a72-9a42-4259-9f44-3c9708385b47/docker-build/0.log" Nov 23 00:32:05 crc kubenswrapper[4743]: I1123 00:32:05.855342 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.005598 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-proxy-ca-bundles\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.005692 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-buildworkdir\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.005732 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-buildcachedir\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.005808 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-run\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.005840 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-node-pullsecrets\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.005858 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55fcr\" (UniqueName: \"kubernetes.io/projected/321e5a72-9a42-4259-9f44-3c9708385b47-kube-api-access-55fcr\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.005905 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-root\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.005931 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-push\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.005978 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-pull\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.006010 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" 
(UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-system-configs\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.006050 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-ca-bundles\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.006101 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-build-blob-cache\") pod \"321e5a72-9a42-4259-9f44-3c9708385b47\" (UID: \"321e5a72-9a42-4259-9f44-3c9708385b47\") " Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.006843 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.006933 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.007026 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.007066 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.007078 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.007963 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.008104 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.008231 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.008996 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.016683 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.016753 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.021751 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321e5a72-9a42-4259-9f44-3c9708385b47-kube-api-access-55fcr" (OuterVolumeSpecName: "kube-api-access-55fcr") pod "321e5a72-9a42-4259-9f44-3c9708385b47" (UID: "321e5a72-9a42-4259-9f44-3c9708385b47"). InnerVolumeSpecName "kube-api-access-55fcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.108908 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.108974 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.108998 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/321e5a72-9a42-4259-9f44-3c9708385b47-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.109020 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.109039 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.109059 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.109078 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/321e5a72-9a42-4259-9f44-3c9708385b47-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.109098 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.109119 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.109137 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/321e5a72-9a42-4259-9f44-3c9708385b47-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.109158 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/321e5a72-9a42-4259-9f44-3c9708385b47-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.109178 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55fcr\" (UniqueName: \"kubernetes.io/projected/321e5a72-9a42-4259-9f44-3c9708385b47-kube-api-access-55fcr\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.564707 4743 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_321e5a72-9a42-4259-9f44-3c9708385b47/docker-build/0.log" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.565151 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"321e5a72-9a42-4259-9f44-3c9708385b47","Type":"ContainerDied","Data":"020725cce41c95583cb7d736118589349d11619d30eac58b4fa30ca881d82f42"} Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.565180 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="020725cce41c95583cb7d736118589349d11619d30eac58b4fa30ca881d82f42" Nov 23 00:32:06 crc kubenswrapper[4743]: I1123 00:32:06.565208 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Nov 23 00:32:12 crc kubenswrapper[4743]: I1123 00:32:12.791194 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Nov 23 00:32:12 crc kubenswrapper[4743]: I1123 00:32:12.805384 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.729133 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="321e5a72-9a42-4259-9f44-3c9708385b47" path="/var/lib/kubelet/pods/321e5a72-9a42-4259-9f44-3c9708385b47/volumes" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.857359 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Nov 23 00:32:14 crc kubenswrapper[4743]: E1123 00:32:14.857620 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321e5a72-9a42-4259-9f44-3c9708385b47" containerName="manage-dockerfile" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.857635 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="321e5a72-9a42-4259-9f44-3c9708385b47" containerName="manage-dockerfile" Nov 23 00:32:14 crc kubenswrapper[4743]: E1123 00:32:14.857652 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321e5a72-9a42-4259-9f44-3c9708385b47" containerName="docker-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.857658 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="321e5a72-9a42-4259-9f44-3c9708385b47" containerName="docker-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.857762 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="321e5a72-9a42-4259-9f44-3c9708385b47" containerName="docker-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.858864 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.861350 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-ca" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.863294 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-sys-config" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.863401 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-global-ca" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.863513 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8jg6l" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.883007 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.952339 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.952603 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tnpk\" (UniqueName: \"kubernetes.io/projected/b130c5b3-ffa1-4624-9e41-5bdfc601e525-kube-api-access-8tnpk\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.952722 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.952779 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.953021 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.953178 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.953377 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.953429 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.953472 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.953559 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.953603 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:14 crc kubenswrapper[4743]: I1123 00:32:14.953639 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.055460 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.055867 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tnpk\" (UniqueName: 
\"kubernetes.io/projected/b130c5b3-ffa1-4624-9e41-5bdfc601e525-kube-api-access-8tnpk\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.055983 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056092 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056226 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056311 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056425 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056539 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056616 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056697 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: 
\"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056744 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056835 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056906 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056960 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.056934 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.057146 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.057653 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.057819 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.057904 4743 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.057959 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.065551 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.068554 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.068875 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.078211 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tnpk\" (UniqueName: \"kubernetes.io/projected/b130c5b3-ffa1-4624-9e41-5bdfc601e525-kube-api-access-8tnpk\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.187319 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:15 crc kubenswrapper[4743]: I1123 00:32:15.613514 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Nov 23 00:32:16 crc kubenswrapper[4743]: I1123 00:32:16.631461 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b130c5b3-ffa1-4624-9e41-5bdfc601e525","Type":"ContainerStarted","Data":"6ae323f0bc67d6ac12ca02f5990e88f2a99ae7aa5636517ab4b18d367b8d5ddb"} Nov 23 00:32:16 crc kubenswrapper[4743]: I1123 00:32:16.631774 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b130c5b3-ffa1-4624-9e41-5bdfc601e525","Type":"ContainerStarted","Data":"9490655bb03ca540903afb2dc7b8e1e04ee03371130dcf0d0d5471cf9b570c55"} Nov 23 00:32:17 crc kubenswrapper[4743]: I1123 00:32:17.644216 4743 generic.go:334] "Generic (PLEG): container finished" podID="b130c5b3-ffa1-4624-9e41-5bdfc601e525" containerID="6ae323f0bc67d6ac12ca02f5990e88f2a99ae7aa5636517ab4b18d367b8d5ddb" exitCode=0 Nov 23 00:32:17 crc kubenswrapper[4743]: I1123 00:32:17.644392 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b130c5b3-ffa1-4624-9e41-5bdfc601e525","Type":"ContainerDied","Data":"6ae323f0bc67d6ac12ca02f5990e88f2a99ae7aa5636517ab4b18d367b8d5ddb"} Nov 23 00:32:18 crc kubenswrapper[4743]: I1123 00:32:18.653171 4743 generic.go:334] "Generic (PLEG): container finished" podID="b130c5b3-ffa1-4624-9e41-5bdfc601e525" containerID="b910a3811d3ef53391946c4feeafaa3e0fa28756672de69c355e21328e08d3ce" exitCode=0 Nov 23 00:32:18 crc kubenswrapper[4743]: I1123 00:32:18.653247 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b130c5b3-ffa1-4624-9e41-5bdfc601e525","Type":"ContainerDied","Data":"b910a3811d3ef53391946c4feeafaa3e0fa28756672de69c355e21328e08d3ce"} Nov 23 00:32:18 crc kubenswrapper[4743]: I1123 00:32:18.704304 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_b130c5b3-ffa1-4624-9e41-5bdfc601e525/manage-dockerfile/0.log" Nov 23 00:32:19 crc kubenswrapper[4743]: I1123 00:32:19.662873 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b130c5b3-ffa1-4624-9e41-5bdfc601e525","Type":"ContainerStarted","Data":"d9ea3d7789249f2174ea3c8327eadc8e4c43a48a4df4d1fa5214c68aaed94dd8"} Nov 23 00:32:19 crc kubenswrapper[4743]: I1123 00:32:19.700931 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=5.700901082 podStartE2EDuration="5.700901082s" podCreationTimestamp="2025-11-23 00:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:32:19.694477576 +0000 UTC m=+1531.772575713" watchObservedRunningTime="2025-11-23 00:32:19.700901082 +0000 UTC m=+1531.778999219" Nov 23 00:32:21 crc kubenswrapper[4743]: I1123 00:32:21.687511 4743 generic.go:334] "Generic (PLEG): container finished" podID="b130c5b3-ffa1-4624-9e41-5bdfc601e525" containerID="d9ea3d7789249f2174ea3c8327eadc8e4c43a48a4df4d1fa5214c68aaed94dd8" exitCode=0 Nov 23 00:32:21 crc 
kubenswrapper[4743]: I1123 00:32:21.687531 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b130c5b3-ffa1-4624-9e41-5bdfc601e525","Type":"ContainerDied","Data":"d9ea3d7789249f2174ea3c8327eadc8e4c43a48a4df4d1fa5214c68aaed94dd8"} Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.019311 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.096000 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-system-configs\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.096076 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-ca-bundles\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.096097 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-node-pullsecrets\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.096726 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.096776 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-root\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.096746 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.096806 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-pull\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.096816 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.102851 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-push\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.102953 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-run\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.103073 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tnpk\" (UniqueName: \"kubernetes.io/projected/b130c5b3-ffa1-4624-9e41-5bdfc601e525-kube-api-access-8tnpk\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.103141 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildworkdir\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.103546 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.104544 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.104659 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-proxy-ca-bundles\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.104943 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.105732 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.105863 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-blob-cache\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.106844 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.107211 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b130c5b3-ffa1-4624-9e41-5bdfc601e525-kube-api-access-8tnpk" (OuterVolumeSpecName: "kube-api-access-8tnpk") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "kube-api-access-8tnpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.108617 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.108827 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildcachedir\") pod \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\" (UID: \"b130c5b3-ffa1-4624-9e41-5bdfc601e525\") " Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.108907 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.109747 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.110066 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.110082 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.110098 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.110115 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.110130 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.110145 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tnpk\" (UniqueName: \"kubernetes.io/projected/b130c5b3-ffa1-4624-9e41-5bdfc601e525-kube-api-access-8tnpk\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.110161 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.110177 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.110194 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/b130c5b3-ffa1-4624-9e41-5bdfc601e525-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.110211 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b130c5b3-ffa1-4624-9e41-5bdfc601e525-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.112675 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "b130c5b3-ffa1-4624-9e41-5bdfc601e525" (UID: "b130c5b3-ffa1-4624-9e41-5bdfc601e525"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.211726 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/b130c5b3-ffa1-4624-9e41-5bdfc601e525-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.703365 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b130c5b3-ffa1-4624-9e41-5bdfc601e525","Type":"ContainerDied","Data":"9490655bb03ca540903afb2dc7b8e1e04ee03371130dcf0d0d5471cf9b570c55"} Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.703415 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9490655bb03ca540903afb2dc7b8e1e04ee03371130dcf0d0d5471cf9b570c55" Nov 23 00:32:23 crc kubenswrapper[4743]: I1123 00:32:23.703419 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.117945 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Nov 23 00:32:41 crc kubenswrapper[4743]: E1123 00:32:41.118667 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b130c5b3-ffa1-4624-9e41-5bdfc601e525" containerName="git-clone" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.118680 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="b130c5b3-ffa1-4624-9e41-5bdfc601e525" containerName="git-clone" Nov 23 00:32:41 crc kubenswrapper[4743]: E1123 00:32:41.118693 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b130c5b3-ffa1-4624-9e41-5bdfc601e525" containerName="manage-dockerfile" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.118699 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="b130c5b3-ffa1-4624-9e41-5bdfc601e525" containerName="manage-dockerfile" Nov 23 00:32:41 crc kubenswrapper[4743]: E1123 00:32:41.118708 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b130c5b3-ffa1-4624-9e41-5bdfc601e525" containerName="docker-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.118714 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="b130c5b3-ffa1-4624-9e41-5bdfc601e525" containerName="docker-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.118805 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="b130c5b3-ffa1-4624-9e41-5bdfc601e525" containerName="docker-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.120218 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.122848 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.123700 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8jg6l" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.123701 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.124276 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.124410 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126323 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126399 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-pull\" 
(UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126446 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126512 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126576 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pggws\" (UniqueName: \"kubernetes.io/projected/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-kube-api-access-pggws\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126660 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126702 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126750 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126793 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126826 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126875 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126909 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.126950 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.153259 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.227677 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.227714 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.227737 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.227757 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.227778 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.227814 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.227839 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.227865 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.227939 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.227955 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.228012 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.228051 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.228088 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.228128 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pggws\" (UniqueName: \"kubernetes.io/projected/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-kube-api-access-pggws\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.228379 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.228530 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.228529 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.228734 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.228999 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.229039 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.229097 
4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.229147 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.233982 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.234434 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.235072 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.249866 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pggws\" (UniqueName: \"kubernetes.io/projected/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-kube-api-access-pggws\") pod \"service-telemetry-framework-index-1-build\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.447567 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.707253 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Nov 23 00:32:41 crc kubenswrapper[4743]: I1123 00:32:41.869095 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a","Type":"ContainerStarted","Data":"d1043580759abe1bc2fd4fbcfea9fa5909708c90103653cb4b1e0f1ca50a01b5"} Nov 23 00:32:42 crc kubenswrapper[4743]: I1123 00:32:42.878805 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a","Type":"ContainerStarted","Data":"44122bf17e9589b3a9ba7831fd8cc3bfd9f653df3585a912e17519af7ba8a272"} Nov 23 00:32:43 crc kubenswrapper[4743]: I1123 00:32:43.888315 4743 generic.go:334] "Generic (PLEG): container finished" podID="ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" containerID="44122bf17e9589b3a9ba7831fd8cc3bfd9f653df3585a912e17519af7ba8a272" exitCode=0 Nov 23 00:32:43 crc kubenswrapper[4743]: I1123 00:32:43.889359 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a","Type":"ContainerDied","Data":"44122bf17e9589b3a9ba7831fd8cc3bfd9f653df3585a912e17519af7ba8a272"} Nov 23 00:32:44 crc kubenswrapper[4743]: I1123 00:32:44.899655 4743 generic.go:334] "Generic (PLEG): container finished" podID="ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" containerID="a99ae9a3b4a6d53f4392484751c6c2ee553cc422bf8ac080b39381a8d5b41aa8" exitCode=0 Nov 23 00:32:44 crc kubenswrapper[4743]: I1123 00:32:44.899708 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a","Type":"ContainerDied","Data":"a99ae9a3b4a6d53f4392484751c6c2ee553cc422bf8ac080b39381a8d5b41aa8"} Nov 23 00:32:44 crc kubenswrapper[4743]: I1123 00:32:44.945545 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_ae592b86-1a47-459a-8f08-0a9f8cbb4c6a/manage-dockerfile/0.log" Nov 23 00:32:45 crc kubenswrapper[4743]: I1123 00:32:45.909053 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a","Type":"ContainerStarted","Data":"0df1eb8eaf48552bf08b8dcc731a0b0929fc23188776d66ec6bedfe18809d2d2"} Nov 23 00:32:45 crc kubenswrapper[4743]: I1123 00:32:45.936743 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=4.936714279 podStartE2EDuration="4.936714279s" podCreationTimestamp="2025-11-23 00:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:32:45.932304362 +0000 UTC m=+1558.010402509" watchObservedRunningTime="2025-11-23 00:32:45.936714279 +0000 UTC m=+1558.014812406" Nov 23 00:32:53 crc kubenswrapper[4743]: I1123 00:32:53.690291 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:32:53 crc kubenswrapper[4743]: I1123 00:32:53.690798 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:33:21 crc kubenswrapper[4743]: I1123 00:33:21.217507 4743 generic.go:334] "Generic (PLEG): container finished" podID="ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" containerID="0df1eb8eaf48552bf08b8dcc731a0b0929fc23188776d66ec6bedfe18809d2d2" exitCode=0 Nov 23 00:33:21 crc kubenswrapper[4743]: I1123 00:33:21.217543 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a","Type":"ContainerDied","Data":"0df1eb8eaf48552bf08b8dcc731a0b0929fc23188776d66ec6bedfe18809d2d2"} Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.576124 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735066 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-ca-bundles\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735390 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-push\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735462 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildworkdir\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735498 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-pull\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735517 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-blob-cache\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735547 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-system-configs\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735570 4743 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-run\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735587 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-proxy-ca-bundles\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735611 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735656 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pggws\" (UniqueName: \"kubernetes.io/projected/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-kube-api-access-pggws\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735695 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-root\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735766 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-node-pullsecrets\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.735788 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildcachedir\") pod \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\" (UID: \"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a\") " Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.736010 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.736155 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.736203 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.736312 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.736930 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.737040 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.737686 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.741707 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-kube-api-access-pggws" (OuterVolumeSpecName: "kube-api-access-pggws") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "kube-api-access-pggws". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.742199 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-push" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-push") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "builder-dockercfg-8jg6l-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.743471 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-pull" (OuterVolumeSpecName: "builder-dockercfg-8jg6l-pull") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "builder-dockercfg-8jg6l-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.744825 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.836893 4743 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.836938 4743 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.836950 4743 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.836963 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-push\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-push\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.836979 4743 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.836988 4743 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8jg6l-pull\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-builder-dockercfg-8jg6l-pull\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.836998 4743 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.837009 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.837018 4743 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.837028 4743 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.837040 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pggws\" (UniqueName: \"kubernetes.io/projected/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-kube-api-access-pggws\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:22 crc kubenswrapper[4743]: I1123 00:33:22.941113 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:33:23 crc kubenswrapper[4743]: I1123 00:33:23.039121 4743 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:23 crc kubenswrapper[4743]: I1123 00:33:23.240204 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"ae592b86-1a47-459a-8f08-0a9f8cbb4c6a","Type":"ContainerDied","Data":"d1043580759abe1bc2fd4fbcfea9fa5909708c90103653cb4b1e0f1ca50a01b5"} Nov 23 00:33:23 crc kubenswrapper[4743]: I1123 00:33:23.240542 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1043580759abe1bc2fd4fbcfea9fa5909708c90103653cb4b1e0f1ca50a01b5" Nov 23 00:33:23 crc kubenswrapper[4743]: I1123 00:33:23.240316 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Nov 23 00:33:23 crc kubenswrapper[4743]: I1123 00:33:23.691838 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:33:23 crc kubenswrapper[4743]: I1123 00:33:23.691956 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:33:24 crc kubenswrapper[4743]: I1123 00:33:24.715440 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-95gvc"] Nov 23 00:33:24 crc kubenswrapper[4743]: E1123 00:33:24.716466 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" containerName="docker-build" Nov 23 00:33:24 crc kubenswrapper[4743]: I1123 00:33:24.716515 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" containerName="docker-build" Nov 23 00:33:24 crc kubenswrapper[4743]: E1123 00:33:24.716539 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" containerName="git-clone" Nov 23 00:33:24 crc kubenswrapper[4743]: I1123 00:33:24.716553 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" containerName="git-clone" Nov 23 00:33:24 crc kubenswrapper[4743]: E1123 00:33:24.716664 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" containerName="manage-dockerfile" Nov 23 00:33:24 crc kubenswrapper[4743]: I1123 00:33:24.718506 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" containerName="manage-dockerfile" Nov 23 00:33:24 crc kubenswrapper[4743]: I1123 00:33:24.718717 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" containerName="docker-build" Nov 23 00:33:24 crc kubenswrapper[4743]: I1123 00:33:24.719329 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-95gvc" Nov 23 00:33:24 crc kubenswrapper[4743]: I1123 00:33:24.722953 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-operators-dockercfg-bq5ld" Nov 23 00:33:24 crc kubenswrapper[4743]: I1123 00:33:24.732838 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-95gvc"] Nov 23 00:33:24 crc kubenswrapper[4743]: I1123 00:33:24.764383 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgzwv\" (UniqueName: \"kubernetes.io/projected/29709fe8-c14c-4df5-a9df-0542bd487213-kube-api-access-sgzwv\") pod \"service-telemetry-framework-operators-95gvc\" (UID: \"29709fe8-c14c-4df5-a9df-0542bd487213\") " pod="service-telemetry/service-telemetry-framework-operators-95gvc" Nov 23 00:33:24 crc kubenswrapper[4743]: I1123 00:33:24.865986 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgzwv\" (UniqueName: \"kubernetes.io/projected/29709fe8-c14c-4df5-a9df-0542bd487213-kube-api-access-sgzwv\") pod \"service-telemetry-framework-operators-95gvc\" (UID: \"29709fe8-c14c-4df5-a9df-0542bd487213\") " pod="service-telemetry/service-telemetry-framework-operators-95gvc" Nov 23 00:33:24 crc kubenswrapper[4743]: I1123 00:33:24.888709 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgzwv\" (UniqueName: \"kubernetes.io/projected/29709fe8-c14c-4df5-a9df-0542bd487213-kube-api-access-sgzwv\") pod \"service-telemetry-framework-operators-95gvc\" (UID: \"29709fe8-c14c-4df5-a9df-0542bd487213\") " pod="service-telemetry/service-telemetry-framework-operators-95gvc" Nov 23 00:33:25 crc kubenswrapper[4743]: I1123 00:33:25.038294 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-95gvc" Nov 23 00:33:25 crc kubenswrapper[4743]: I1123 00:33:25.258161 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-95gvc"] Nov 23 00:33:25 crc kubenswrapper[4743]: I1123 00:33:25.440182 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a" (UID: "ae592b86-1a47-459a-8f08-0a9f8cbb4c6a"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:33:25 crc kubenswrapper[4743]: I1123 00:33:25.475555 4743 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ae592b86-1a47-459a-8f08-0a9f8cbb4c6a-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:26 crc kubenswrapper[4743]: I1123 00:33:26.274359 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-95gvc" event={"ID":"29709fe8-c14c-4df5-a9df-0542bd487213","Type":"ContainerStarted","Data":"ab9c9f72f1a837498b0589919aff25bcd42d2971ad8fa3f1efad5e1e38d2785b"} Nov 23 00:33:29 crc kubenswrapper[4743]: I1123 00:33:29.304354 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-95gvc"] Nov 23 00:33:30 crc kubenswrapper[4743]: I1123 00:33:30.123013 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-j5qlt"] Nov 23 00:33:30 crc kubenswrapper[4743]: I1123 00:33:30.124576 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-j5qlt" Nov 23 00:33:30 crc kubenswrapper[4743]: I1123 00:33:30.132260 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-j5qlt"] Nov 23 00:33:30 crc kubenswrapper[4743]: I1123 00:33:30.246901 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpt4q\" (UniqueName: \"kubernetes.io/projected/8a6f6934-929f-4742-a636-96cc1c344ad7-kube-api-access-wpt4q\") pod \"service-telemetry-framework-operators-j5qlt\" (UID: \"8a6f6934-929f-4742-a636-96cc1c344ad7\") " pod="service-telemetry/service-telemetry-framework-operators-j5qlt" Nov 23 00:33:30 crc kubenswrapper[4743]: I1123 00:33:30.350104 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpt4q\" (UniqueName: \"kubernetes.io/projected/8a6f6934-929f-4742-a636-96cc1c344ad7-kube-api-access-wpt4q\") pod \"service-telemetry-framework-operators-j5qlt\" (UID: \"8a6f6934-929f-4742-a636-96cc1c344ad7\") " pod="service-telemetry/service-telemetry-framework-operators-j5qlt" Nov 23 00:33:30 crc kubenswrapper[4743]: I1123 00:33:30.380720 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpt4q\" (UniqueName: \"kubernetes.io/projected/8a6f6934-929f-4742-a636-96cc1c344ad7-kube-api-access-wpt4q\") pod \"service-telemetry-framework-operators-j5qlt\" (UID: \"8a6f6934-929f-4742-a636-96cc1c344ad7\") " pod="service-telemetry/service-telemetry-framework-operators-j5qlt" Nov 23 00:33:30 crc kubenswrapper[4743]: I1123 00:33:30.447878 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-j5qlt" Nov 23 00:33:37 crc kubenswrapper[4743]: I1123 00:33:37.007518 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-j5qlt"] Nov 23 00:33:37 crc kubenswrapper[4743]: W1123 00:33:37.026868 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a6f6934_929f_4742_a636_96cc1c344ad7.slice/crio-6d4be2dd9dc291ecad21517df3ba43f5e29822047e694807d144195dbb3e6755 WatchSource:0}: Error finding container 6d4be2dd9dc291ecad21517df3ba43f5e29822047e694807d144195dbb3e6755: Status 404 returned error can't find the container with id 6d4be2dd9dc291ecad21517df3ba43f5e29822047e694807d144195dbb3e6755 Nov 23 00:33:37 crc kubenswrapper[4743]: I1123 00:33:37.346087 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-j5qlt" event={"ID":"8a6f6934-929f-4742-a636-96cc1c344ad7","Type":"ContainerStarted","Data":"d80a600ba08c31b904a5801c9c516a0fbb2b6f4c811f1b12c2a8052440a24620"} Nov 23 00:33:37 crc kubenswrapper[4743]: I1123 00:33:37.346406 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-j5qlt" event={"ID":"8a6f6934-929f-4742-a636-96cc1c344ad7","Type":"ContainerStarted","Data":"6d4be2dd9dc291ecad21517df3ba43f5e29822047e694807d144195dbb3e6755"} Nov 23 00:33:37 crc kubenswrapper[4743]: I1123 00:33:37.347777 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-95gvc" event={"ID":"29709fe8-c14c-4df5-a9df-0542bd487213","Type":"ContainerStarted","Data":"408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2"} Nov 23 00:33:37 crc kubenswrapper[4743]: I1123 00:33:37.347853 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-operators-95gvc" podUID="29709fe8-c14c-4df5-a9df-0542bd487213" containerName="registry-server" containerID="cri-o://408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2" gracePeriod=2 Nov 23 00:33:37 crc kubenswrapper[4743]: I1123 00:33:37.371533 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-operators-j5qlt" podStartSLOduration=7.269342868 podStartE2EDuration="7.371519527s" podCreationTimestamp="2025-11-23 00:33:30 +0000 UTC" firstStartedPulling="2025-11-23 00:33:37.033187236 +0000 UTC m=+1609.111285363" lastFinishedPulling="2025-11-23 00:33:37.135363905 +0000 UTC m=+1609.213462022" observedRunningTime="2025-11-23 00:33:37.368200836 +0000 UTC m=+1609.446298963" watchObservedRunningTime="2025-11-23 00:33:37.371519527 +0000 UTC m=+1609.449617654" Nov 23 00:33:37 crc kubenswrapper[4743]: I1123 00:33:37.727292 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-95gvc" Nov 23 00:33:37 crc kubenswrapper[4743]: I1123 00:33:37.854352 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgzwv\" (UniqueName: \"kubernetes.io/projected/29709fe8-c14c-4df5-a9df-0542bd487213-kube-api-access-sgzwv\") pod \"29709fe8-c14c-4df5-a9df-0542bd487213\" (UID: \"29709fe8-c14c-4df5-a9df-0542bd487213\") " Nov 23 00:33:37 crc kubenswrapper[4743]: I1123 00:33:37.859561 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29709fe8-c14c-4df5-a9df-0542bd487213-kube-api-access-sgzwv" (OuterVolumeSpecName: "kube-api-access-sgzwv") pod "29709fe8-c14c-4df5-a9df-0542bd487213" (UID: "29709fe8-c14c-4df5-a9df-0542bd487213"). InnerVolumeSpecName "kube-api-access-sgzwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:33:37 crc kubenswrapper[4743]: I1123 00:33:37.962075 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgzwv\" (UniqueName: \"kubernetes.io/projected/29709fe8-c14c-4df5-a9df-0542bd487213-kube-api-access-sgzwv\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:38 crc kubenswrapper[4743]: I1123 00:33:38.355461 4743 generic.go:334] "Generic (PLEG): container finished" podID="29709fe8-c14c-4df5-a9df-0542bd487213" containerID="408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2" exitCode=0 Nov 23 00:33:38 crc kubenswrapper[4743]: I1123 00:33:38.355523 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-95gvc" Nov 23 00:33:38 crc kubenswrapper[4743]: I1123 00:33:38.355540 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-95gvc" event={"ID":"29709fe8-c14c-4df5-a9df-0542bd487213","Type":"ContainerDied","Data":"408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2"} Nov 23 00:33:38 crc kubenswrapper[4743]: I1123 00:33:38.355887 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-95gvc" event={"ID":"29709fe8-c14c-4df5-a9df-0542bd487213","Type":"ContainerDied","Data":"ab9c9f72f1a837498b0589919aff25bcd42d2971ad8fa3f1efad5e1e38d2785b"} Nov 23 00:33:38 crc kubenswrapper[4743]: I1123 00:33:38.355904 4743 scope.go:117] "RemoveContainer" containerID="408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2" Nov 23 00:33:38 crc kubenswrapper[4743]: I1123 00:33:38.383373 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-95gvc"] Nov 23 00:33:38 crc kubenswrapper[4743]: I1123 00:33:38.386726 4743 scope.go:117] "RemoveContainer" containerID="408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2" Nov 23 00:33:38 crc kubenswrapper[4743]: E1123 00:33:38.387090 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2\": container with ID starting with 408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2 not found: ID does not exist" containerID="408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2" Nov 23 00:33:38 crc kubenswrapper[4743]: I1123 00:33:38.387125 4743 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2"} err="failed to get container status \"408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2\": rpc error: code = NotFound desc = could not find container \"408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2\": container with ID starting with 408b30e310fc34b2dcc491264829a917fa6e9dcd90d07182b5fb492812f343f2 not found: ID does not exist" Nov 23 00:33:38 crc kubenswrapper[4743]: I1123 00:33:38.388258 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-95gvc"] Nov 23 00:33:38 crc kubenswrapper[4743]: I1123 00:33:38.728850 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29709fe8-c14c-4df5-a9df-0542bd487213" path="/var/lib/kubelet/pods/29709fe8-c14c-4df5-a9df-0542bd487213/volumes" Nov 23 00:33:40 crc kubenswrapper[4743]: I1123 00:33:40.449107 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/service-telemetry-framework-operators-j5qlt" Nov 23 00:33:40 crc kubenswrapper[4743]: I1123 00:33:40.449346 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/service-telemetry-framework-operators-j5qlt" Nov 23 00:33:40 crc kubenswrapper[4743]: I1123 00:33:40.481971 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/service-telemetry-framework-operators-j5qlt" Nov 23 00:33:42 crc kubenswrapper[4743]: I1123 00:33:42.419336 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/service-telemetry-framework-operators-j5qlt" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.393000 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6"] Nov 23 00:33:44 crc kubenswrapper[4743]: E1123 00:33:44.393558 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29709fe8-c14c-4df5-a9df-0542bd487213" containerName="registry-server" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.393574 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="29709fe8-c14c-4df5-a9df-0542bd487213" containerName="registry-server" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.393713 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="29709fe8-c14c-4df5-a9df-0542bd487213" containerName="registry-server" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.394816 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.433295 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6"] Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.451094 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqpsb\" (UniqueName: \"kubernetes.io/projected/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-kube-api-access-pqpsb\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.451218 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.451267 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.553005 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqpsb\" (UniqueName: \"kubernetes.io/projected/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-kube-api-access-pqpsb\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.553301 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.553404 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.554022 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " 
pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.554247 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.578586 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqpsb\" (UniqueName: \"kubernetes.io/projected/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-kube-api-access-pqpsb\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.729687 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:44 crc kubenswrapper[4743]: I1123 00:33:44.943948 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6"] Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.373241 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z"] Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.374424 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.395782 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z"] Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.425011 4743 generic.go:334] "Generic (PLEG): container finished" podID="e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" containerID="0337c8f04f14a5d0e2886d27c465d251d1da996bb1490b2beb453a0901b16369" exitCode=0 Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.425349 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" event={"ID":"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93","Type":"ContainerDied","Data":"0337c8f04f14a5d0e2886d27c465d251d1da996bb1490b2beb453a0901b16369"} Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.425501 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" event={"ID":"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93","Type":"ContainerStarted","Data":"562f2c8dbd4155b2b09661142a58b4f14243aa3e4125561e77f58b7517ec6802"} Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.467912 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z\" (UID: \"49b34f3c-5969-488b-a1fc-b02c63909a27\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:45 crc 
kubenswrapper[4743]: I1123 00:33:45.467989 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g7rq\" (UniqueName: \"kubernetes.io/projected/49b34f3c-5969-488b-a1fc-b02c63909a27-kube-api-access-7g7rq\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z\" (UID: \"49b34f3c-5969-488b-a1fc-b02c63909a27\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.468013 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z\" (UID: \"49b34f3c-5969-488b-a1fc-b02c63909a27\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.569291 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z\" (UID: \"49b34f3c-5969-488b-a1fc-b02c63909a27\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.569586 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g7rq\" (UniqueName: \"kubernetes.io/projected/49b34f3c-5969-488b-a1fc-b02c63909a27-kube-api-access-7g7rq\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z\" (UID: \"49b34f3c-5969-488b-a1fc-b02c63909a27\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.569637 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z\" (UID: \"49b34f3c-5969-488b-a1fc-b02c63909a27\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.570146 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z\" (UID: \"49b34f3c-5969-488b-a1fc-b02c63909a27\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.570171 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z\" (UID: \"49b34f3c-5969-488b-a1fc-b02c63909a27\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.598905 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g7rq\" (UniqueName: \"kubernetes.io/projected/49b34f3c-5969-488b-a1fc-b02c63909a27-kube-api-access-7g7rq\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z\" (UID: 
\"49b34f3c-5969-488b-a1fc-b02c63909a27\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:45 crc kubenswrapper[4743]: I1123 00:33:45.691787 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:46 crc kubenswrapper[4743]: I1123 00:33:46.039672 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z"] Nov 23 00:33:46 crc kubenswrapper[4743]: I1123 00:33:46.443331 4743 generic.go:334] "Generic (PLEG): container finished" podID="e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" containerID="3ef95e172cdd43ccbebbc75d495b40809f6f5bf20f4143da1a00215e8979bb2f" exitCode=0 Nov 23 00:33:46 crc kubenswrapper[4743]: I1123 00:33:46.443447 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" event={"ID":"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93","Type":"ContainerDied","Data":"3ef95e172cdd43ccbebbc75d495b40809f6f5bf20f4143da1a00215e8979bb2f"} Nov 23 00:33:46 crc kubenswrapper[4743]: I1123 00:33:46.445592 4743 generic.go:334] "Generic (PLEG): container finished" podID="49b34f3c-5969-488b-a1fc-b02c63909a27" containerID="302abf762a216988ca12d9b3782c3d9b60c2d3e8f642a25feae6435091939219" exitCode=0 Nov 23 00:33:46 crc kubenswrapper[4743]: I1123 00:33:46.445663 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" event={"ID":"49b34f3c-5969-488b-a1fc-b02c63909a27","Type":"ContainerDied","Data":"302abf762a216988ca12d9b3782c3d9b60c2d3e8f642a25feae6435091939219"} Nov 23 00:33:46 crc kubenswrapper[4743]: I1123 00:33:46.445685 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" event={"ID":"49b34f3c-5969-488b-a1fc-b02c63909a27","Type":"ContainerStarted","Data":"97f679cab3eb3f7d83c8df6ccf6eb815daa312b4734369d2a69a9f037e9c772d"} Nov 23 00:33:47 crc kubenswrapper[4743]: I1123 00:33:47.453183 4743 generic.go:334] "Generic (PLEG): container finished" podID="49b34f3c-5969-488b-a1fc-b02c63909a27" containerID="f8696a2a48d8a08877e0cef37a5153db0a2413b721cae14c4a71ee52b28cb523" exitCode=0 Nov 23 00:33:47 crc kubenswrapper[4743]: I1123 00:33:47.453460 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" event={"ID":"49b34f3c-5969-488b-a1fc-b02c63909a27","Type":"ContainerDied","Data":"f8696a2a48d8a08877e0cef37a5153db0a2413b721cae14c4a71ee52b28cb523"} Nov 23 00:33:47 crc kubenswrapper[4743]: I1123 00:33:47.456930 4743 generic.go:334] "Generic (PLEG): container finished" podID="e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" containerID="595077c079c3cc93711aa159b0878aab67c9c1e7dd4c7a07b56d9e7c6718bf07" exitCode=0 Nov 23 00:33:47 crc kubenswrapper[4743]: I1123 00:33:47.456956 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" event={"ID":"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93","Type":"ContainerDied","Data":"595077c079c3cc93711aa159b0878aab67c9c1e7dd4c7a07b56d9e7c6718bf07"} Nov 23 00:33:48 crc kubenswrapper[4743]: I1123 00:33:48.467803 4743 generic.go:334] "Generic (PLEG): container finished" podID="49b34f3c-5969-488b-a1fc-b02c63909a27" 
containerID="cf2b3f817715cdc8798a0636d2320643402ed7e17e016574f8ed3541150e919b" exitCode=0 Nov 23 00:33:48 crc kubenswrapper[4743]: I1123 00:33:48.468302 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" event={"ID":"49b34f3c-5969-488b-a1fc-b02c63909a27","Type":"ContainerDied","Data":"cf2b3f817715cdc8798a0636d2320643402ed7e17e016574f8ed3541150e919b"} Nov 23 00:33:48 crc kubenswrapper[4743]: I1123 00:33:48.856448 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:48 crc kubenswrapper[4743]: I1123 00:33:48.917999 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-bundle\") pod \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " Nov 23 00:33:48 crc kubenswrapper[4743]: I1123 00:33:48.918035 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-util\") pod \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " Nov 23 00:33:48 crc kubenswrapper[4743]: I1123 00:33:48.918072 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqpsb\" (UniqueName: \"kubernetes.io/projected/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-kube-api-access-pqpsb\") pod \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\" (UID: \"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93\") " Nov 23 00:33:48 crc kubenswrapper[4743]: I1123 00:33:48.919158 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-bundle" (OuterVolumeSpecName: "bundle") pod "e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" (UID: "e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:33:48 crc kubenswrapper[4743]: I1123 00:33:48.922923 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-kube-api-access-pqpsb" (OuterVolumeSpecName: "kube-api-access-pqpsb") pod "e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" (UID: "e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93"). InnerVolumeSpecName "kube-api-access-pqpsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:33:48 crc kubenswrapper[4743]: I1123 00:33:48.934884 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-util" (OuterVolumeSpecName: "util") pod "e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" (UID: "e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.019750 4743 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-util\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.019785 4743 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.019794 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqpsb\" (UniqueName: \"kubernetes.io/projected/e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93-kube-api-access-pqpsb\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.476230 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.476303 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09d9cv6" event={"ID":"e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93","Type":"ContainerDied","Data":"562f2c8dbd4155b2b09661142a58b4f14243aa3e4125561e77f58b7517ec6802"} Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.476345 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="562f2c8dbd4155b2b09661142a58b4f14243aa3e4125561e77f58b7517ec6802" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.726786 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.831675 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g7rq\" (UniqueName: \"kubernetes.io/projected/49b34f3c-5969-488b-a1fc-b02c63909a27-kube-api-access-7g7rq\") pod \"49b34f3c-5969-488b-a1fc-b02c63909a27\" (UID: \"49b34f3c-5969-488b-a1fc-b02c63909a27\") " Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.832358 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-bundle\") pod \"49b34f3c-5969-488b-a1fc-b02c63909a27\" (UID: \"49b34f3c-5969-488b-a1fc-b02c63909a27\") " Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.832449 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-util\") pod \"49b34f3c-5969-488b-a1fc-b02c63909a27\" (UID: \"49b34f3c-5969-488b-a1fc-b02c63909a27\") " Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.833152 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-bundle" (OuterVolumeSpecName: "bundle") pod "49b34f3c-5969-488b-a1fc-b02c63909a27" (UID: "49b34f3c-5969-488b-a1fc-b02c63909a27"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.845294 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b34f3c-5969-488b-a1fc-b02c63909a27-kube-api-access-7g7rq" (OuterVolumeSpecName: "kube-api-access-7g7rq") pod "49b34f3c-5969-488b-a1fc-b02c63909a27" (UID: "49b34f3c-5969-488b-a1fc-b02c63909a27"). InnerVolumeSpecName "kube-api-access-7g7rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.849073 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-util" (OuterVolumeSpecName: "util") pod "49b34f3c-5969-488b-a1fc-b02c63909a27" (UID: "49b34f3c-5969-488b-a1fc-b02c63909a27"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.934749 4743 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-util\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.934782 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g7rq\" (UniqueName: \"kubernetes.io/projected/49b34f3c-5969-488b-a1fc-b02c63909a27-kube-api-access-7g7rq\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:49 crc kubenswrapper[4743]: I1123 00:33:49.934795 4743 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49b34f3c-5969-488b-a1fc-b02c63909a27-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 00:33:50 crc kubenswrapper[4743]: I1123 00:33:50.485210 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" event={"ID":"49b34f3c-5969-488b-a1fc-b02c63909a27","Type":"ContainerDied","Data":"97f679cab3eb3f7d83c8df6ccf6eb815daa312b4734369d2a69a9f037e9c772d"} Nov 23 00:33:50 crc kubenswrapper[4743]: I1123 00:33:50.485776 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97f679cab3eb3f7d83c8df6ccf6eb815daa312b4734369d2a69a9f037e9c772d" Nov 23 00:33:50 crc kubenswrapper[4743]: I1123 00:33:50.485313 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a5nz9z" Nov 23 00:33:53 crc kubenswrapper[4743]: I1123 00:33:53.691024 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:33:53 crc kubenswrapper[4743]: I1123 00:33:53.691482 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:33:53 crc kubenswrapper[4743]: I1123 00:33:53.691551 4743 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:33:53 crc kubenswrapper[4743]: I1123 00:33:53.692082 4743 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15"} pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 00:33:53 crc kubenswrapper[4743]: I1123 00:33:53.692155 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" containerID="cri-o://bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" gracePeriod=600 Nov 23 00:33:53 crc kubenswrapper[4743]: E1123 00:33:53.828741 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.514799 4743 generic.go:334] "Generic (PLEG): container finished" podID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" exitCode=0 Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.514892 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerDied","Data":"bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15"} Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.515976 4743 scope.go:117] "RemoveContainer" containerID="9dcd1bca2c6fe5058dfd781e9013fb345a66f90c8bac7c0726c518f47bebe38e" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.516550 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:33:54 crc kubenswrapper[4743]: E1123 00:33:54.516784 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.531360 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp"] Nov 23 00:33:54 crc kubenswrapper[4743]: E1123 00:33:54.531587 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b34f3c-5969-488b-a1fc-b02c63909a27" containerName="extract" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.531598 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b34f3c-5969-488b-a1fc-b02c63909a27" containerName="extract" Nov 23 00:33:54 crc kubenswrapper[4743]: E1123 00:33:54.531614 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" containerName="util" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.531619 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" containerName="util" Nov 23 00:33:54 crc kubenswrapper[4743]: E1123 00:33:54.531631 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" containerName="pull" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.531637 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" containerName="pull" Nov 23 00:33:54 crc kubenswrapper[4743]: E1123 00:33:54.531649 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b34f3c-5969-488b-a1fc-b02c63909a27" containerName="util" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.531655 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b34f3c-5969-488b-a1fc-b02c63909a27" containerName="util" Nov 23 00:33:54 crc kubenswrapper[4743]: E1123 00:33:54.531663 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b34f3c-5969-488b-a1fc-b02c63909a27" containerName="pull" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.531668 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b34f3c-5969-488b-a1fc-b02c63909a27" containerName="pull" Nov 23 00:33:54 crc kubenswrapper[4743]: E1123 00:33:54.531682 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" containerName="extract" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.531687 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" containerName="extract" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.531780 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="e763d82b-9e89-4ddd-9ba3-cfdabc1c1c93" containerName="extract" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.531790 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b34f3c-5969-488b-a1fc-b02c63909a27" containerName="extract" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.532180 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.538264 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-tmlx6" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.555571 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp"] Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.600782 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmpf\" (UniqueName: \"kubernetes.io/projected/e9bf573d-e75b-4d63-adc0-c4812688b371-kube-api-access-kfmpf\") pod \"service-telemetry-operator-594cccf5b4-hv9hp\" (UID: \"e9bf573d-e75b-4d63-adc0-c4812688b371\") " pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.600911 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/e9bf573d-e75b-4d63-adc0-c4812688b371-runner\") pod \"service-telemetry-operator-594cccf5b4-hv9hp\" (UID: \"e9bf573d-e75b-4d63-adc0-c4812688b371\") " pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.701757 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfmpf\" (UniqueName: \"kubernetes.io/projected/e9bf573d-e75b-4d63-adc0-c4812688b371-kube-api-access-kfmpf\") pod \"service-telemetry-operator-594cccf5b4-hv9hp\" (UID: \"e9bf573d-e75b-4d63-adc0-c4812688b371\") " pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.701849 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/e9bf573d-e75b-4d63-adc0-c4812688b371-runner\") pod \"service-telemetry-operator-594cccf5b4-hv9hp\" (UID: \"e9bf573d-e75b-4d63-adc0-c4812688b371\") " pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.702369 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/e9bf573d-e75b-4d63-adc0-c4812688b371-runner\") pod \"service-telemetry-operator-594cccf5b4-hv9hp\" (UID: \"e9bf573d-e75b-4d63-adc0-c4812688b371\") " pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.746444 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfmpf\" (UniqueName: \"kubernetes.io/projected/e9bf573d-e75b-4d63-adc0-c4812688b371-kube-api-access-kfmpf\") pod \"service-telemetry-operator-594cccf5b4-hv9hp\" (UID: \"e9bf573d-e75b-4d63-adc0-c4812688b371\") " pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" Nov 23 00:33:54 crc kubenswrapper[4743]: I1123 00:33:54.848641 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" Nov 23 00:33:55 crc kubenswrapper[4743]: I1123 00:33:55.330897 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp"] Nov 23 00:33:55 crc kubenswrapper[4743]: I1123 00:33:55.523066 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" event={"ID":"e9bf573d-e75b-4d63-adc0-c4812688b371","Type":"ContainerStarted","Data":"8d7db2ebe4f11775e4422f0fc95355ba867617c87a78a51b85742e33f3dc3614"} Nov 23 00:33:57 crc kubenswrapper[4743]: I1123 00:33:57.592451 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv"] Nov 23 00:33:57 crc kubenswrapper[4743]: I1123 00:33:57.593907 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv" Nov 23 00:33:57 crc kubenswrapper[4743]: I1123 00:33:57.597026 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-rfdgm" Nov 23 00:33:57 crc kubenswrapper[4743]: I1123 00:33:57.604207 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv"] Nov 23 00:33:57 crc kubenswrapper[4743]: I1123 00:33:57.658095 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9dn9\" (UniqueName: \"kubernetes.io/projected/adecaf84-7fbf-45a3-91ee-aee4f96249df-kube-api-access-j9dn9\") pod \"smart-gateway-operator-866b58b8fd-m7dsv\" (UID: \"adecaf84-7fbf-45a3-91ee-aee4f96249df\") " pod="service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv" Nov 23 00:33:57 crc kubenswrapper[4743]: I1123 00:33:57.658163 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/adecaf84-7fbf-45a3-91ee-aee4f96249df-runner\") pod \"smart-gateway-operator-866b58b8fd-m7dsv\" (UID: \"adecaf84-7fbf-45a3-91ee-aee4f96249df\") " pod="service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv" Nov 23 00:33:57 crc kubenswrapper[4743]: I1123 00:33:57.760641 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9dn9\" (UniqueName: \"kubernetes.io/projected/adecaf84-7fbf-45a3-91ee-aee4f96249df-kube-api-access-j9dn9\") pod \"smart-gateway-operator-866b58b8fd-m7dsv\" (UID: \"adecaf84-7fbf-45a3-91ee-aee4f96249df\") " pod="service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv" Nov 23 00:33:57 crc kubenswrapper[4743]: I1123 00:33:57.760785 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/adecaf84-7fbf-45a3-91ee-aee4f96249df-runner\") pod \"smart-gateway-operator-866b58b8fd-m7dsv\" (UID: \"adecaf84-7fbf-45a3-91ee-aee4f96249df\") " pod="service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv" Nov 23 00:33:57 crc kubenswrapper[4743]: I1123 00:33:57.762447 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/adecaf84-7fbf-45a3-91ee-aee4f96249df-runner\") pod \"smart-gateway-operator-866b58b8fd-m7dsv\" (UID: \"adecaf84-7fbf-45a3-91ee-aee4f96249df\") " pod="service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv" Nov 23 00:33:57 crc kubenswrapper[4743]: I1123 00:33:57.790801 4743 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9dn9\" (UniqueName: \"kubernetes.io/projected/adecaf84-7fbf-45a3-91ee-aee4f96249df-kube-api-access-j9dn9\") pod \"smart-gateway-operator-866b58b8fd-m7dsv\" (UID: \"adecaf84-7fbf-45a3-91ee-aee4f96249df\") " pod="service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv" Nov 23 00:33:57 crc kubenswrapper[4743]: I1123 00:33:57.913004 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv" Nov 23 00:33:58 crc kubenswrapper[4743]: I1123 00:33:58.122061 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv"] Nov 23 00:33:58 crc kubenswrapper[4743]: W1123 00:33:58.131853 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadecaf84_7fbf_45a3_91ee_aee4f96249df.slice/crio-2543a5167b0215dc2783ea18eac4687dc01a011f4af00f47ffe09f2d82728ae8 WatchSource:0}: Error finding container 2543a5167b0215dc2783ea18eac4687dc01a011f4af00f47ffe09f2d82728ae8: Status 404 returned error can't find the container with id 2543a5167b0215dc2783ea18eac4687dc01a011f4af00f47ffe09f2d82728ae8 Nov 23 00:33:58 crc kubenswrapper[4743]: I1123 00:33:58.554416 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv" event={"ID":"adecaf84-7fbf-45a3-91ee-aee4f96249df","Type":"ContainerStarted","Data":"2543a5167b0215dc2783ea18eac4687dc01a011f4af00f47ffe09f2d82728ae8"} Nov 23 00:34:05 crc kubenswrapper[4743]: I1123 00:34:05.722164 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:34:05 crc kubenswrapper[4743]: E1123 00:34:05.722801 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:34:16 crc kubenswrapper[4743]: E1123 00:34:16.172964 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/service-telemetry-operator:stable-1.5" Nov 23 00:34:16 crc kubenswrapper[4743]: E1123 00:34:16.173768 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/infrawatch/service-telemetry-operator:stable-1.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:service-telemetry-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_WEBHOOK_SNMP_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/prometheus-webhook-snmp:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_IMAGE,Value:quay.io/prometheus/prometheus:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER_IMAGE,Value:quay.io/prometheus/alertmanager:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:service-telemetry-operator.v1.5.1763857888,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfmpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-594cccf5b4-hv9hp_service-telemetry(e9bf573d-e75b-4d63-adc0-c4812688b371): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 00:34:16 crc kubenswrapper[4743]: E1123 00:34:16.175000 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" podUID="e9bf573d-e75b-4d63-adc0-c4812688b371" Nov 23 00:34:16 crc kubenswrapper[4743]: I1123 00:34:16.707277 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv" event={"ID":"adecaf84-7fbf-45a3-91ee-aee4f96249df","Type":"ContainerStarted","Data":"f72276cf7b0e9d14f2ad9938cd016d2f7b9bca652676f1ffbc83b3c5598aed89"} Nov 23 00:34:16 crc kubenswrapper[4743]: E1123 00:34:16.708859 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/infrawatch/service-telemetry-operator:stable-1.5\\\"\"" pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" podUID="e9bf573d-e75b-4d63-adc0-c4812688b371" Nov 23 00:34:16 crc kubenswrapper[4743]: I1123 00:34:16.729464 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-866b58b8fd-m7dsv" podStartSLOduration=1.413865886 podStartE2EDuration="19.729443458s" podCreationTimestamp="2025-11-23 00:33:57 +0000 UTC" firstStartedPulling="2025-11-23 00:33:58.134306555 +0000 UTC m=+1630.212404682" lastFinishedPulling="2025-11-23 00:34:16.449884127 +0000 UTC m=+1648.527982254" observedRunningTime="2025-11-23 00:34:16.72625002 +0000 UTC m=+1648.804348187" watchObservedRunningTime="2025-11-23 00:34:16.729443458 +0000 UTC m=+1648.807541595" Nov 23 00:34:20 crc kubenswrapper[4743]: I1123 00:34:20.722177 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:34:20 crc kubenswrapper[4743]: E1123 00:34:20.722814 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:34:31 crc kubenswrapper[4743]: I1123 00:34:31.723212 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:34:31 crc kubenswrapper[4743]: E1123 00:34:31.724351 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:34:31 crc kubenswrapper[4743]: I1123 00:34:31.806586 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" event={"ID":"e9bf573d-e75b-4d63-adc0-c4812688b371","Type":"ContainerStarted","Data":"eb96a67c019d2860263a5ab19602801a9c987ebb65ff94495e2499b997a313aa"} Nov 23 00:34:31 crc kubenswrapper[4743]: I1123 00:34:31.825063 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-594cccf5b4-hv9hp" podStartSLOduration=1.92844295 podStartE2EDuration="37.825044352s" podCreationTimestamp="2025-11-23 00:33:54 +0000 UTC" firstStartedPulling="2025-11-23 00:33:55.339237452 +0000 UTC m=+1627.417335589" lastFinishedPulling="2025-11-23 00:34:31.235838864 +0000 UTC m=+1663.313936991" observedRunningTime="2025-11-23 00:34:31.822965331 +0000 UTC m=+1663.901063548" watchObservedRunningTime="2025-11-23 00:34:31.825044352 +0000 UTC m=+1663.903142479" Nov 23 00:34:46 crc kubenswrapper[4743]: I1123 00:34:46.722570 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:34:46 crc kubenswrapper[4743]: E1123 00:34:46.723216 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.618801 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gltk2"] Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.620372 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.625659 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.625938 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.626127 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.626267 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.626395 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.626676 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.626930 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-jsb66" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.652109 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gltk2"] Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.761574 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.761669 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.761775 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " 
pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.762013 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-config\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.762114 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.762186 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-users\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.762213 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjcvw\" (UniqueName: \"kubernetes.io/projected/4a012c6e-41ee-427c-a507-11683e7bcd41-kube-api-access-gjcvw\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.864076 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.864131 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.864151 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.864200 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-config\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: 
\"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.864249 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.864280 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjcvw\" (UniqueName: \"kubernetes.io/projected/4a012c6e-41ee-427c-a507-11683e7bcd41-kube-api-access-gjcvw\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.864298 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-users\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.866993 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-config\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.875336 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.875671 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.875770 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.876888 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-users\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" 
Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.891018 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.901434 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjcvw\" (UniqueName: \"kubernetes.io/projected/4a012c6e-41ee-427c-a507-11683e7bcd41-kube-api-access-gjcvw\") pod \"default-interconnect-68864d46cb-gltk2\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:56 crc kubenswrapper[4743]: I1123 00:34:56.961840 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:34:57 crc kubenswrapper[4743]: I1123 00:34:57.429444 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gltk2"] Nov 23 00:34:57 crc kubenswrapper[4743]: W1123 00:34:57.439947 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a012c6e_41ee_427c_a507_11683e7bcd41.slice/crio-ab7d78619e3c7acdb568520125bb9edd890cda163011a5dc29e6dc6358e0c3d4 WatchSource:0}: Error finding container ab7d78619e3c7acdb568520125bb9edd890cda163011a5dc29e6dc6358e0c3d4: Status 404 returned error can't find the container with id ab7d78619e3c7acdb568520125bb9edd890cda163011a5dc29e6dc6358e0c3d4 Nov 23 00:34:58 crc kubenswrapper[4743]: I1123 00:34:58.002846 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-gltk2" event={"ID":"4a012c6e-41ee-427c-a507-11683e7bcd41","Type":"ContainerStarted","Data":"ab7d78619e3c7acdb568520125bb9edd890cda163011a5dc29e6dc6358e0c3d4"} Nov 23 00:35:00 crc kubenswrapper[4743]: I1123 00:35:00.722393 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:35:00 crc kubenswrapper[4743]: E1123 00:35:00.722892 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:35:03 crc kubenswrapper[4743]: I1123 00:35:03.043677 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-gltk2" event={"ID":"4a012c6e-41ee-427c-a507-11683e7bcd41","Type":"ContainerStarted","Data":"aac3531e4fc08a56a8fbef67dc7e74002eb2e23e3d94052e7a0b04e419274702"} Nov 23 00:35:03 crc kubenswrapper[4743]: I1123 00:35:03.067623 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-gltk2" podStartSLOduration=2.354056386 podStartE2EDuration="7.06759507s" podCreationTimestamp="2025-11-23 00:34:56 +0000 UTC" firstStartedPulling="2025-11-23 00:34:57.444933423 +0000 UTC m=+1689.523031540" 
lastFinishedPulling="2025-11-23 00:35:02.158472097 +0000 UTC m=+1694.236570224" observedRunningTime="2025-11-23 00:35:03.061074251 +0000 UTC m=+1695.139172388" watchObservedRunningTime="2025-11-23 00:35:03.06759507 +0000 UTC m=+1695.145693237" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.691665 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.693861 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.696774 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.696805 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.696774 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.700057 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.700363 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.700641 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-p88d8" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.700826 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.701042 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.717117 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.895435 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-config\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.895578 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmcz5\" (UniqueName: \"kubernetes.io/projected/3f240806-9a77-4dd9-9962-5778151c1902-kube-api-access-nmcz5\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.895723 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3f240806-9a77-4dd9-9962-5778151c1902-config-out\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.895749 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f240806-9a77-4dd9-9962-5778151c1902-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.895795 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.895857 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.895891 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-web-config\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.895943 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6d0526e-6a36-4286-a202-565bdbeaff10\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6d0526e-6a36-4286-a202-565bdbeaff10\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.896097 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3f240806-9a77-4dd9-9962-5778151c1902-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.896128 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3f240806-9a77-4dd9-9962-5778151c1902-tls-assets\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.998120 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3f240806-9a77-4dd9-9962-5778151c1902-config-out\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.998171 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f240806-9a77-4dd9-9962-5778151c1902-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " 
pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.998193 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.998211 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.998229 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-web-config\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.998251 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b6d0526e-6a36-4286-a202-565bdbeaff10\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6d0526e-6a36-4286-a202-565bdbeaff10\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.998287 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3f240806-9a77-4dd9-9962-5778151c1902-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.998303 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3f240806-9a77-4dd9-9962-5778151c1902-tls-assets\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.998342 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-config\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: I1123 00:35:06.998365 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmcz5\" (UniqueName: \"kubernetes.io/projected/3f240806-9a77-4dd9-9962-5778151c1902-kube-api-access-nmcz5\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:06 crc kubenswrapper[4743]: E1123 00:35:06.999460 4743 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Nov 23 00:35:06 crc kubenswrapper[4743]: E1123 00:35:06.999543 4743 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-prometheus-proxy-tls podName:3f240806-9a77-4dd9-9962-5778151c1902 nodeName:}" failed. No retries permitted until 2025-11-23 00:35:07.499522362 +0000 UTC m=+1699.577620489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3f240806-9a77-4dd9-9962-5778151c1902") : secret "default-prometheus-proxy-tls" not found Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.000477 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f240806-9a77-4dd9-9962-5778151c1902-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.001317 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3f240806-9a77-4dd9-9962-5778151c1902-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.004769 4743 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.004818 4743 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b6d0526e-6a36-4286-a202-565bdbeaff10\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6d0526e-6a36-4286-a202-565bdbeaff10\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da902488e31464a43de07ffb3c3eb43f463606a4ad4f249247c2e4d99b5281ce/globalmount\"" pod="service-telemetry/prometheus-default-0" Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.005107 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3f240806-9a77-4dd9-9962-5778151c1902-config-out\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.005384 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-config\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.006749 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-web-config\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.014557 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3f240806-9a77-4dd9-9962-5778151c1902-tls-assets\") pod 
\"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.016427 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmcz5\" (UniqueName: \"kubernetes.io/projected/3f240806-9a77-4dd9-9962-5778151c1902-kube-api-access-nmcz5\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.024854 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.035960 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b6d0526e-6a36-4286-a202-565bdbeaff10\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6d0526e-6a36-4286-a202-565bdbeaff10\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:07 crc kubenswrapper[4743]: I1123 00:35:07.505556 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:07 crc kubenswrapper[4743]: E1123 00:35:07.505894 4743 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Nov 23 00:35:07 crc kubenswrapper[4743]: E1123 00:35:07.505985 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-prometheus-proxy-tls podName:3f240806-9a77-4dd9-9962-5778151c1902 nodeName:}" failed. No retries permitted until 2025-11-23 00:35:08.505956049 +0000 UTC m=+1700.584054216 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3f240806-9a77-4dd9-9962-5778151c1902") : secret "default-prometheus-proxy-tls" not found Nov 23 00:35:08 crc kubenswrapper[4743]: I1123 00:35:08.521621 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:08 crc kubenswrapper[4743]: I1123 00:35:08.528634 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f240806-9a77-4dd9-9962-5778151c1902-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3f240806-9a77-4dd9-9962-5778151c1902\") " pod="service-telemetry/prometheus-default-0" Nov 23 00:35:08 crc kubenswrapper[4743]: I1123 00:35:08.815000 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Nov 23 00:35:09 crc kubenswrapper[4743]: I1123 00:35:09.085574 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Nov 23 00:35:09 crc kubenswrapper[4743]: W1123 00:35:09.107696 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f240806_9a77_4dd9_9962_5778151c1902.slice/crio-727f27c34c3c4f89f6ec81a6850700ea0bfb136d322f7f7472e2231b117b219e WatchSource:0}: Error finding container 727f27c34c3c4f89f6ec81a6850700ea0bfb136d322f7f7472e2231b117b219e: Status 404 returned error can't find the container with id 727f27c34c3c4f89f6ec81a6850700ea0bfb136d322f7f7472e2231b117b219e Nov 23 00:35:09 crc kubenswrapper[4743]: I1123 00:35:09.113184 4743 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 00:35:10 crc kubenswrapper[4743]: I1123 00:35:10.097734 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3f240806-9a77-4dd9-9962-5778151c1902","Type":"ContainerStarted","Data":"727f27c34c3c4f89f6ec81a6850700ea0bfb136d322f7f7472e2231b117b219e"} Nov 23 00:35:13 crc kubenswrapper[4743]: I1123 00:35:13.444370 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3f240806-9a77-4dd9-9962-5778151c1902","Type":"ContainerStarted","Data":"d0298713386c3c213abf8b5919027e6b14a467b61c8a6420650e96df2a2497f1"} Nov 23 00:35:15 crc kubenswrapper[4743]: I1123 00:35:15.721776 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:35:15 crc kubenswrapper[4743]: E1123 00:35:15.723032 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:35:17 crc kubenswrapper[4743]: I1123 
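[Editor's note] The retry cadence in the nestedpendingoperations.go:348 entries is worth noticing: durationBeforeRetry starts at 500ms and doubles on each consecutive failure of the same volume operation (500ms, then 1s here; 500ms, 1s, 2s for alertmanager further down), and the mount succeeds on the first retry after the secret appears. A sketch of that doubling policy, assuming the 500ms initial delay and 2x growth visible in these entries; the cap below is an illustrative assumption, since the kubelet enforces its own maximum (on the order of two minutes) so a persistently missing secret is retried forever, just less often:

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the per-volume retry delay, starting at 500ms,
    // capped so retries never stop entirely.
    func nextBackoff(prev time.Duration) time.Duration {
        const (
            initial = 500 * time.Millisecond
            max     = 2 * time.Minute // assumed cap for illustration
        )
        if prev == 0 {
            return initial
        }
        next := prev * 2
        if next > max {
            next = max
        }
        return next
    }

    func main() {
        d := time.Duration(0)
        for i := 0; i < 5; i++ {
            d = nextBackoff(d)
            fmt.Println(d) // 500ms 1s 2s 4s 8s — matches the durationBeforeRetry values above
        }
    }

The "back-off 5m0s restarting failed container" error for machine-config-daemon is a different mechanism: that is the kubelet's container-restart backoff for CrashLoopBackOff, which caps at five minutes, not the volume-operation backoff.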
Nov 23 00:35:17 crc kubenswrapper[4743]: I1123 00:35:17.140083 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-bdpk2"]
Nov 23 00:35:17 crc kubenswrapper[4743]: I1123 00:35:17.140907 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-bdpk2"
Nov 23 00:35:17 crc kubenswrapper[4743]: I1123 00:35:17.155434 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-bdpk2"]
Nov 23 00:35:17 crc kubenswrapper[4743]: I1123 00:35:17.286794 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vtb\" (UniqueName: \"kubernetes.io/projected/21f5036a-6e16-42f6-94c2-d253a043278e-kube-api-access-n7vtb\") pod \"default-snmp-webhook-6856cfb745-bdpk2\" (UID: \"21f5036a-6e16-42f6-94c2-d253a043278e\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-bdpk2"
Nov 23 00:35:17 crc kubenswrapper[4743]: I1123 00:35:17.388530 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vtb\" (UniqueName: \"kubernetes.io/projected/21f5036a-6e16-42f6-94c2-d253a043278e-kube-api-access-n7vtb\") pod \"default-snmp-webhook-6856cfb745-bdpk2\" (UID: \"21f5036a-6e16-42f6-94c2-d253a043278e\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-bdpk2"
Nov 23 00:35:17 crc kubenswrapper[4743]: I1123 00:35:17.412615 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7vtb\" (UniqueName: \"kubernetes.io/projected/21f5036a-6e16-42f6-94c2-d253a043278e-kube-api-access-n7vtb\") pod \"default-snmp-webhook-6856cfb745-bdpk2\" (UID: \"21f5036a-6e16-42f6-94c2-d253a043278e\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-bdpk2"
Nov 23 00:35:17 crc kubenswrapper[4743]: I1123 00:35:17.456224 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-bdpk2"
Nov 23 00:35:17 crc kubenswrapper[4743]: I1123 00:35:17.659711 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-bdpk2"]
Nov 23 00:35:18 crc kubenswrapper[4743]: I1123 00:35:18.489133 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-bdpk2" event={"ID":"21f5036a-6e16-42f6-94c2-d253a043278e","Type":"ContainerStarted","Data":"11ceccc8a9a4f06a640cb7df9ab509eb6bc058619365c2374ebc115fa4637d21"}
Nov 23 00:35:20 crc kubenswrapper[4743]: I1123 00:35:20.502662 4743 generic.go:334] "Generic (PLEG): container finished" podID="3f240806-9a77-4dd9-9962-5778151c1902" containerID="d0298713386c3c213abf8b5919027e6b14a467b61c8a6420650e96df2a2497f1" exitCode=0
Nov 23 00:35:20 crc kubenswrapper[4743]: I1123 00:35:20.502733 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3f240806-9a77-4dd9-9962-5778151c1902","Type":"ContainerDied","Data":"d0298713386c3c213abf8b5919027e6b14a467b61c8a6420650e96df2a2497f1"}
Nov 23 00:35:20 crc kubenswrapper[4743]: I1123 00:35:20.926272 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Nov 23 00:35:20 crc kubenswrapper[4743]: I1123 00:35:20.933447 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:20 crc kubenswrapper[4743]: I1123 00:35:20.936087 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0"
Nov 23 00:35:20 crc kubenswrapper[4743]: I1123 00:35:20.938028 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls"
Nov 23 00:35:20 crc kubenswrapper[4743]: I1123 00:35:20.938530 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated"
Nov 23 00:35:20 crc kubenswrapper[4743]: I1123 00:35:20.938652 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config"
Nov 23 00:35:20 crc kubenswrapper[4743]: I1123 00:35:20.939581 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config"
Nov 23 00:35:20 crc kubenswrapper[4743]: I1123 00:35:20.946442 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Nov 23 00:35:20 crc kubenswrapper[4743]: I1123 00:35:20.963076 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-9qv87"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.065143 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/17173603-6315-4375-8b4f-75b534bb9af2-tls-assets\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.065190 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.065216 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-web-config\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.065402 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4a7536a8-2fe4-4615-80fd-b778881c00d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a7536a8-2fe4-4615-80fd-b778881c00d6\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.065440 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.065469 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/17173603-6315-4375-8b4f-75b534bb9af2-config-out\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.065599 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.065671 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-config-volume\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.065706 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgpmq\" (UniqueName: \"kubernetes.io/projected/17173603-6315-4375-8b4f-75b534bb9af2-kube-api-access-vgpmq\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.167544 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgpmq\" (UniqueName: \"kubernetes.io/projected/17173603-6315-4375-8b4f-75b534bb9af2-kube-api-access-vgpmq\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.167633 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/17173603-6315-4375-8b4f-75b534bb9af2-tls-assets\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.167672 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.167711 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-web-config\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.167800 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4a7536a8-2fe4-4615-80fd-b778881c00d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a7536a8-2fe4-4615-80fd-b778881c00d6\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.167833 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.167866 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/17173603-6315-4375-8b4f-75b534bb9af2-config-out\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.167960 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.168016 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-config-volume\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: E1123 00:35:21.168697 4743 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Nov 23 00:35:21 crc kubenswrapper[4743]: E1123 00:35:21.168772 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls podName:17173603-6315-4375-8b4f-75b534bb9af2 nodeName:}" failed. No retries permitted until 2025-11-23 00:35:21.668752251 +0000 UTC m=+1713.746850378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "17173603-6315-4375-8b4f-75b534bb9af2") : secret "default-alertmanager-proxy-tls" not found
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.172245 4743 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.172290 4743 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4a7536a8-2fe4-4615-80fd-b778881c00d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a7536a8-2fe4-4615-80fd-b778881c00d6\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/93baa1b068b08842d57d5be4fbf6ed31a34a950c5bc4c47e6629a5e8aaf349e0/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.173137 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/17173603-6315-4375-8b4f-75b534bb9af2-tls-assets\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.173825 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-config-volume\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.173912 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.177029 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.179504 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-web-config\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.184635 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/17173603-6315-4375-8b4f-75b534bb9af2-config-out\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.185473 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgpmq\" (UniqueName: \"kubernetes.io/projected/17173603-6315-4375-8b4f-75b534bb9af2-kube-api-access-vgpmq\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.210212 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4a7536a8-2fe4-4615-80fd-b778881c00d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a7536a8-2fe4-4615-80fd-b778881c00d6\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0"
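[Editor's note] The csi_attacher.go:380 entries ("STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...") show why MountVolume.MountDevice "succeeds" instantly for these PVCs: the kubevirt.io.hostpath-provisioner node plugin does not advertise the optional CSI staging capability, so the kubelet skips NodeStageVolume and goes straight to the per-pod publish (the SetUp step). A hypothetical sketch, using the CSI Go bindings, of how a node plugin would opt in to staging; a driver that omits this capability gets exactly the skip path seen in this log:

    package main

    import (
        "context"

        csi "github.com/container-storage-interface/spec/lib/go/csi"
    )

    type nodeServer struct {
        csi.UnimplementedNodeServer
    }

    // NodeGetCapabilities is where a driver opts in to NodeStage/NodeUnstage.
    // Omitting STAGE_UNSTAGE_VOLUME from this list (as the hostpath
    // provisioner evidently does) makes the kubelet log
    // "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...".
    func (n *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
        return &csi.NodeGetCapabilitiesResponse{
            Capabilities: []*csi.NodeServiceCapability{{
                Type: &csi.NodeServiceCapability_Rpc{
                    Rpc: &csi.NodeServiceCapability_RPC{
                        Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
                    },
                },
            }},
        }, nil
    }

    func main() {
        _ = &nodeServer{} // a real driver would register this with a gRPC server
    }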
\"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0" Nov 23 00:35:21 crc kubenswrapper[4743]: I1123 00:35:21.680499 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0" Nov 23 00:35:21 crc kubenswrapper[4743]: E1123 00:35:21.680716 4743 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Nov 23 00:35:21 crc kubenswrapper[4743]: E1123 00:35:21.680830 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls podName:17173603-6315-4375-8b4f-75b534bb9af2 nodeName:}" failed. No retries permitted until 2025-11-23 00:35:22.680803566 +0000 UTC m=+1714.758901723 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "17173603-6315-4375-8b4f-75b534bb9af2") : secret "default-alertmanager-proxy-tls" not found Nov 23 00:35:22 crc kubenswrapper[4743]: I1123 00:35:22.695212 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0" Nov 23 00:35:22 crc kubenswrapper[4743]: E1123 00:35:22.695373 4743 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Nov 23 00:35:22 crc kubenswrapper[4743]: E1123 00:35:22.695721 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls podName:17173603-6315-4375-8b4f-75b534bb9af2 nodeName:}" failed. No retries permitted until 2025-11-23 00:35:24.695703671 +0000 UTC m=+1716.773801798 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "17173603-6315-4375-8b4f-75b534bb9af2") : secret "default-alertmanager-proxy-tls" not found Nov 23 00:35:24 crc kubenswrapper[4743]: I1123 00:35:24.719937 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0" Nov 23 00:35:24 crc kubenswrapper[4743]: I1123 00:35:24.737524 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/17173603-6315-4375-8b4f-75b534bb9af2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"17173603-6315-4375-8b4f-75b534bb9af2\") " pod="service-telemetry/alertmanager-default-0" Nov 23 00:35:24 crc kubenswrapper[4743]: I1123 00:35:24.873910 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Nov 23 00:35:25 crc kubenswrapper[4743]: I1123 00:35:25.301904 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Nov 23 00:35:25 crc kubenswrapper[4743]: I1123 00:35:25.558986 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"17173603-6315-4375-8b4f-75b534bb9af2","Type":"ContainerStarted","Data":"bedf694c52b5bccffe185fd75e104801f7ffc8d381337b0627e89f192aac9aac"} Nov 23 00:35:26 crc kubenswrapper[4743]: I1123 00:35:26.570775 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-bdpk2" event={"ID":"21f5036a-6e16-42f6-94c2-d253a043278e","Type":"ContainerStarted","Data":"27cb0122b05b5a469d54b8272ac54486330cb751805d02ccc3e10b8e1de2ffda"} Nov 23 00:35:26 crc kubenswrapper[4743]: I1123 00:35:26.588731 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-bdpk2" podStartSLOduration=1.938146873 podStartE2EDuration="9.588710152s" podCreationTimestamp="2025-11-23 00:35:17 +0000 UTC" firstStartedPulling="2025-11-23 00:35:17.665674934 +0000 UTC m=+1709.743773061" lastFinishedPulling="2025-11-23 00:35:25.316238213 +0000 UTC m=+1717.394336340" observedRunningTime="2025-11-23 00:35:26.584875449 +0000 UTC m=+1718.662973596" watchObservedRunningTime="2025-11-23 00:35:26.588710152 +0000 UTC m=+1718.666808269" Nov 23 00:35:27 crc kubenswrapper[4743]: I1123 00:35:27.722331 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:35:27 crc kubenswrapper[4743]: E1123 00:35:27.722562 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:35:31 crc kubenswrapper[4743]: I1123 00:35:31.602251 4743 
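[Editor's note] The pod_startup_latency_tracker entry for the SNMP webhook decodes from its own timestamps: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (00:35:26.588710152 − 00:35:17 = 9.588710152s), and podStartSLOduration subtracts the image-pull window, lastFinishedPulling − firstStartedPulling (25.316238213 − 17.665674934 = 7.650563279s), leaving 1.938146873s. A small Go check of that arithmetic using the values from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse(time.RFC3339Nano, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-11-23T00:35:17Z")
        firstPull := parse("2025-11-23T00:35:17.665674934Z")
        lastPull := parse("2025-11-23T00:35:25.316238213Z")
        running := parse("2025-11-23T00:35:26.588710152Z")

        e2e := running.Sub(created)              // 9.588710152s
        slo := e2e - lastPull.Sub(firstPull)     // 1.938146873s
        fmt.Println(e2e, slo)                    // matches the log entry exactly
    }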
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"17173603-6315-4375-8b4f-75b534bb9af2","Type":"ContainerStarted","Data":"2b157c5f21db7e9b5d3f21928c34f448e659d1cee40345e75cfcc1c05c26ceb9"} Nov 23 00:35:33 crc kubenswrapper[4743]: I1123 00:35:33.991435 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn"] Nov 23 00:35:33 crc kubenswrapper[4743]: I1123 00:35:33.992888 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:33 crc kubenswrapper[4743]: I1123 00:35:33.995572 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Nov 23 00:35:33 crc kubenswrapper[4743]: I1123 00:35:33.996345 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Nov 23 00:35:33 crc kubenswrapper[4743]: I1123 00:35:33.996654 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.005317 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-vfdb6" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.016311 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn"] Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.052020 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.052335 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.052426 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ca66272f-dcb2-4bc1-88d8-ed89f3493798-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.052789 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrlx\" (UniqueName: \"kubernetes.io/projected/ca66272f-dcb2-4bc1-88d8-ed89f3493798-kube-api-access-6xrlx\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 
00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.052932 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ca66272f-dcb2-4bc1-88d8-ed89f3493798-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.155472 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.156124 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ca66272f-dcb2-4bc1-88d8-ed89f3493798-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.158113 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ca66272f-dcb2-4bc1-88d8-ed89f3493798-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.158337 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xrlx\" (UniqueName: \"kubernetes.io/projected/ca66272f-dcb2-4bc1-88d8-ed89f3493798-kube-api-access-6xrlx\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.158462 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ca66272f-dcb2-4bc1-88d8-ed89f3493798-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.158530 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: E1123 00:35:34.158844 4743 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.159833 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ca66272f-dcb2-4bc1-88d8-ed89f3493798-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: E1123 00:35:34.159916 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-default-cloud1-coll-meter-proxy-tls podName:ca66272f-dcb2-4bc1-88d8-ed89f3493798 nodeName:}" failed. No retries permitted until 2025-11-23 00:35:34.658888519 +0000 UTC m=+1726.736986656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" (UID: "ca66272f-dcb2-4bc1-88d8-ed89f3493798") : secret "default-cloud1-coll-meter-proxy-tls" not found Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.169185 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.183905 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xrlx\" (UniqueName: \"kubernetes.io/projected/ca66272f-dcb2-4bc1-88d8-ed89f3493798-kube-api-access-6xrlx\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.623095 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3f240806-9a77-4dd9-9962-5778151c1902","Type":"ContainerStarted","Data":"3ea104f3593b0e2454b5df5f8bdd0cef463a18d32f6892724227909f99e73ba9"} Nov 23 00:35:34 crc kubenswrapper[4743]: I1123 00:35:34.664686 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:34 crc kubenswrapper[4743]: E1123 00:35:34.664857 4743 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Nov 23 00:35:34 crc kubenswrapper[4743]: E1123 00:35:34.664944 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-default-cloud1-coll-meter-proxy-tls podName:ca66272f-dcb2-4bc1-88d8-ed89f3493798 nodeName:}" failed. No retries permitted until 2025-11-23 00:35:35.664927177 +0000 UTC m=+1727.743025304 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" (UID: "ca66272f-dcb2-4bc1-88d8-ed89f3493798") : secret "default-cloud1-coll-meter-proxy-tls" not found Nov 23 00:35:35 crc kubenswrapper[4743]: I1123 00:35:35.679948 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:35 crc kubenswrapper[4743]: I1123 00:35:35.685675 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca66272f-dcb2-4bc1-88d8-ed89f3493798-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn\" (UID: \"ca66272f-dcb2-4bc1-88d8-ed89f3493798\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:35 crc kubenswrapper[4743]: I1123 00:35:35.847679 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" Nov 23 00:35:36 crc kubenswrapper[4743]: I1123 00:35:36.263456 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn"] Nov 23 00:35:36 crc kubenswrapper[4743]: W1123 00:35:36.265755 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca66272f_dcb2_4bc1_88d8_ed89f3493798.slice/crio-61e2027dfd5b2009ec4201b1c5fd7b7c4d126a6f4f6fd495c1c53b81a8c2cdc7 WatchSource:0}: Error finding container 61e2027dfd5b2009ec4201b1c5fd7b7c4d126a6f4f6fd495c1c53b81a8c2cdc7: Status 404 returned error can't find the container with id 61e2027dfd5b2009ec4201b1c5fd7b7c4d126a6f4f6fd495c1c53b81a8c2cdc7 Nov 23 00:35:36 crc kubenswrapper[4743]: I1123 00:35:36.635530 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" event={"ID":"ca66272f-dcb2-4bc1-88d8-ed89f3493798","Type":"ContainerStarted","Data":"61e2027dfd5b2009ec4201b1c5fd7b7c4d126a6f4f6fd495c1c53b81a8c2cdc7"} Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.037650 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm"] Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.040286 4743 util.go:30] "No sandbox for pod can be found. 
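[Editor's note] The same create-before-secret race repeats for each smart gateway (coll-meter above, ceil-meter and sens-meter below): the pod is scheduled before its *-proxy-tls secret exists, the mount fails once or twice, and the next backoff retry succeeds a second or two later once the secret is created. A hypothetical client-go sketch of waiting for such a secret with a single-object watch instead of polling, the same pattern behind the kubelet's per-pod secret reflectors ("Caches populated for *v1.Secret ..."):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSecret returns once the named secret exists: a point lookup
    // first (the watch alone would miss an object created before it started),
    // then a watch scoped to that one object by field selector.
    func waitForSecret(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        if _, err := cs.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{}); err == nil {
            return nil // already present
        }
        w, err := cs.CoreV1().Secrets(ns).Watch(ctx, metav1.ListOptions{
            FieldSelector: fields.OneTermEqualSelector("metadata.name", name).String(),
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            if ev.Type == watch.Added || ev.Type == watch.Modified {
                return nil
            }
        }
        return ctx.Err()
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForSecret(context.Background(), cs,
            "service-telemetry", "default-cloud1-coll-meter-proxy-tls"); err != nil {
            panic(err)
        }
        fmt.Println("proxy-tls secret present; the kubelet's next retry will mount it")
    }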
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.054547 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.055182 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.103645 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.103738 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4n65\" (UniqueName: \"kubernetes.io/projected/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-kube-api-access-g4n65\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.103768 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.103825 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.103863 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.124908 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm"] Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.206084 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc 
kubenswrapper[4743]: I1123 00:35:37.206159 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.206190 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.206230 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4n65\" (UniqueName: \"kubernetes.io/projected/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-kube-api-access-g4n65\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.206254 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: E1123 00:35:37.206478 4743 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 23 00:35:37 crc kubenswrapper[4743]: E1123 00:35:37.206609 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-default-cloud1-ceil-meter-proxy-tls podName:e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a nodeName:}" failed. No retries permitted until 2025-11-23 00:35:37.706580305 +0000 UTC m=+1729.784678432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" (UID: "e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a") : secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.209913 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.210275 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.222556 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.226226 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4n65\" (UniqueName: \"kubernetes.io/projected/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-kube-api-access-g4n65\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.644332 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3f240806-9a77-4dd9-9962-5778151c1902","Type":"ContainerStarted","Data":"f129d5c5f54d9883aa2d9603bfc67c06afdf842ef1603709b8510f65a89c3863"} Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.645709 4743 generic.go:334] "Generic (PLEG): container finished" podID="17173603-6315-4375-8b4f-75b534bb9af2" containerID="2b157c5f21db7e9b5d3f21928c34f448e659d1cee40345e75cfcc1c05c26ceb9" exitCode=0 Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.645738 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"17173603-6315-4375-8b4f-75b534bb9af2","Type":"ContainerDied","Data":"2b157c5f21db7e9b5d3f21928c34f448e659d1cee40345e75cfcc1c05c26ceb9"} Nov 23 00:35:37 crc kubenswrapper[4743]: I1123 00:35:37.714006 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:37 crc kubenswrapper[4743]: E1123 
00:35:37.714146 4743 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 23 00:35:37 crc kubenswrapper[4743]: E1123 00:35:37.714203 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-default-cloud1-ceil-meter-proxy-tls podName:e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a nodeName:}" failed. No retries permitted until 2025-11-23 00:35:38.714186732 +0000 UTC m=+1730.792284859 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" (UID: "e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a") : secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 23 00:35:38 crc kubenswrapper[4743]: I1123 00:35:38.729642 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:38 crc kubenswrapper[4743]: I1123 00:35:38.735041 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm\" (UID: \"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:38 crc kubenswrapper[4743]: I1123 00:35:38.883429 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.779174 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn"] Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.781001 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.783350 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.784944 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.795964 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn"] Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.857513 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d12fd46-053c-430f-9586-8fd69219c820-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.857587 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd5l8\" (UniqueName: \"kubernetes.io/projected/8d12fd46-053c-430f-9586-8fd69219c820-kube-api-access-wd5l8\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.857959 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8d12fd46-053c-430f-9586-8fd69219c820-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.858044 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.858147 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.959777 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8d12fd46-053c-430f-9586-8fd69219c820-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc 
kubenswrapper[4743]: I1123 00:35:40.959859 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.959909 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.959936 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d12fd46-053c-430f-9586-8fd69219c820-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.959957 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd5l8\" (UniqueName: \"kubernetes.io/projected/8d12fd46-053c-430f-9586-8fd69219c820-kube-api-access-wd5l8\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: E1123 00:35:40.960120 4743 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Nov 23 00:35:40 crc kubenswrapper[4743]: E1123 00:35:40.960213 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-default-cloud1-sens-meter-proxy-tls podName:8d12fd46-053c-430f-9586-8fd69219c820 nodeName:}" failed. No retries permitted until 2025-11-23 00:35:41.460193666 +0000 UTC m=+1733.538291793 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" (UID: "8d12fd46-053c-430f-9586-8fd69219c820") : secret "default-cloud1-sens-meter-proxy-tls" not found Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.960707 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d12fd46-053c-430f-9586-8fd69219c820-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.961038 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8d12fd46-053c-430f-9586-8fd69219c820-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.968941 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:40 crc kubenswrapper[4743]: I1123 00:35:40.980407 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd5l8\" (UniqueName: \"kubernetes.io/projected/8d12fd46-053c-430f-9586-8fd69219c820-kube-api-access-wd5l8\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:41 crc kubenswrapper[4743]: I1123 00:35:41.467288 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:41 crc kubenswrapper[4743]: E1123 00:35:41.467426 4743 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Nov 23 00:35:41 crc kubenswrapper[4743]: E1123 00:35:41.467497 4743 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-default-cloud1-sens-meter-proxy-tls podName:8d12fd46-053c-430f-9586-8fd69219c820 nodeName:}" failed. No retries permitted until 2025-11-23 00:35:42.467466944 +0000 UTC m=+1734.545565061 (durationBeforeRetry 1s). 
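
The sequence above is kubelet's standard handling of a volume whose backing object does not exist yet: the smart-gateway pods reference *-proxy-tls secrets that are evidently created by another controller a few seconds after the pods themselves, so MountVolume.SetUp fails with "secret not found", the operation is parked in nestedpendingoperations, and the retry delay doubles on each failure (durationBeforeRetry 500ms, then 1s here) until the secret appears and the same mount succeeds. A minimal sketch of that schedule, assuming simple doubling from the logged 500ms starting point; the cap below is an assumption, since this log only shows the first two steps:

// backoff.go - sketch of the per-volume retry schedule ("durationBeforeRetry")
// visible above. Assumption: plain doubling from 500ms with an assumed cap;
// only the 500ms and 1s steps are actually evidenced by the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 500 * time.Millisecond      // first retry, as logged
		maxDelay     = 2*time.Minute + 2*time.Second // assumed cap, not from the log
	)
	delay := initialDelay
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

The same doubling idea, with different constants, shows up later in this log for container restarts ("back-off 5m0s restarting failed container") and for image pulls (ImagePullBackOff).
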
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" (UID: "8d12fd46-053c-430f-9586-8fd69219c820") : secret "default-cloud1-sens-meter-proxy-tls" not found Nov 23 00:35:41 crc kubenswrapper[4743]: I1123 00:35:41.743751 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:35:41 crc kubenswrapper[4743]: E1123 00:35:41.744101 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:35:42 crc kubenswrapper[4743]: I1123 00:35:42.494281 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:42 crc kubenswrapper[4743]: I1123 00:35:42.498108 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d12fd46-053c-430f-9586-8fd69219c820-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn\" (UID: \"8d12fd46-053c-430f-9586-8fd69219c820\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:42 crc kubenswrapper[4743]: I1123 00:35:42.602389 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.064854 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx"] Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.067790 4743 util.go:30] "No sandbox for pod can be found. 
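
Once the secret exists, the 00:35:42 entries show the parked mount succeeding and kubelet moving on to sandbox creation: "No sandbox for pod can be found. Need to start a new one" simply means this is the pod's first sync on the node, so a CRI pod sandbox (network namespace plus infra container) must be created before any application container can start; the same pattern then repeats for the coll-event gateway added at 00:35:48. For completeness, a small client-go sketch of the condition kubelet was retrying on, polling until the secret exists; namespace and secret name are taken from the log, the in-cluster config and timeout are illustrative:

// waitsecret.go - client-go sketch: wait until the secret kubelet was
// retrying on above exists. Names from the log; everything else assumed.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms (matching kubelet's first retry interval) for up
	// to one minute until the secret shows up.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().Secrets("service-telemetry").Get(ctx, "default-cloud1-sens-meter-proxy-tls", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // keep waiting
			}
			return err == nil, err
		})
	fmt.Println("secret present:", err == nil)
}

Nothing needs to poll by hand here, kubelet's retry loop above does exactly this; the sketch only makes the awaited condition explicit.
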
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.071558 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.072018 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.078071 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx"] Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.211313 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6jdh\" (UniqueName: \"kubernetes.io/projected/ffc96728-970f-4f41-afe2-14bb99a1c727-kube-api-access-t6jdh\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.211380 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ffc96728-970f-4f41-afe2-14bb99a1c727-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.211401 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ffc96728-970f-4f41-afe2-14bb99a1c727-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.211426 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/ffc96728-970f-4f41-afe2-14bb99a1c727-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.313252 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6jdh\" (UniqueName: \"kubernetes.io/projected/ffc96728-970f-4f41-afe2-14bb99a1c727-kube-api-access-t6jdh\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.313977 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ffc96728-970f-4f41-afe2-14bb99a1c727-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.314911 4743 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ffc96728-970f-4f41-afe2-14bb99a1c727-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.314935 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ffc96728-970f-4f41-afe2-14bb99a1c727-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.314010 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ffc96728-970f-4f41-afe2-14bb99a1c727-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.315131 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/ffc96728-970f-4f41-afe2-14bb99a1c727-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.329733 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/ffc96728-970f-4f41-afe2-14bb99a1c727-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.329998 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6jdh\" (UniqueName: \"kubernetes.io/projected/ffc96728-970f-4f41-afe2-14bb99a1c727-kube-api-access-t6jdh\") pod \"default-cloud1-coll-event-smartgateway-5f89974568-q42fx\" (UID: \"ffc96728-970f-4f41-afe2-14bb99a1c727\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:48 crc kubenswrapper[4743]: I1123 00:35:48.413938 4743 util.go:30] "No sandbox for pod can be found. 
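
Each of these smart-gateway pods mounts the same few kinds of volume, all visible in the VerifyControllerAttachedVolume/MountVolume pairs above: a ConfigMap (sg-core-config), one or more Secrets (the proxy TLS, session secret, or elastic-certs here), an EmptyDir scratch directory (socket-dir), and a projected service-account token volume (kube-api-access-*). A sketch of how the first three are declared in a pod spec; the names are taken from the log, the rest of the spec is omitted:

// volumes.go - sketch of the volume kinds the smart-gateway pods mount.
// Names come from the log; the surrounding pod spec is assumed.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		{ // ConfigMap-backed: the sg-core configuration
			Name: "sg-core-config",
			VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "default-cloud1-coll-event-sg-core-configmap"},
			}},
		},
		{ // Secret-backed: the TLS material referenced above
			Name: "elastic-certs",
			VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
				SecretName: "elasticsearch-es-cert",
			}},
		},
		{ // EmptyDir: per-pod scratch space shared between containers
			Name:         "socket-dir",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		},
	}
	fmt.Println(len(vols), "volumes declared")
}

The kube-api-access-* volume is not declared in the workload manifest at all; it is the projected token, CA bundle, and namespace volume injected by the API server's ServiceAccount admission, which is why its name carries a random suffix.
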
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" Nov 23 00:35:50 crc kubenswrapper[4743]: E1123 00:35:50.689389 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/openshift/origin-oauth-proxy:latest" Nov 23 00:35:50 crc kubenswrapper[4743]: E1123 00:35:50.689750 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:oauth-proxy,Image:quay.io/openshift/origin-oauth-proxy:latest,Command:[],Args:[-https-address=:9092 -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -upstream=http://localhost:9090/ -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-service-account=prometheus-stf -openshift-sar={\"namespace\":\"service-telemetry\",\"resource\": \"prometheuses\", \"resourceAPIGroup\":\"monitoring.rhobs\", \"verb\":\"get\"} -openshift-delegate-urls={\"/\":{\"namespace\":\"service-telemetry\",\"resource\": \"prometheuses\", \"group\":\"monitoring.rhobs\", \"verb\":\"get\"}}],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:9092,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:secret-default-prometheus-proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-session-secret,ReadOnly:false,MountPath:/etc/proxy/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmcz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-default-0_service-telemetry(3f240806-9a77-4dd9-9962-5778151c1902): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 00:35:50 crc kubenswrapper[4743]: E1123 00:35:50.691159 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/prometheus-default-0" podUID="3f240806-9a77-4dd9-9962-5778151c1902" Nov 23 00:35:50 crc kubenswrapper[4743]: E1123 00:35:50.757357 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/openshift/origin-oauth-proxy:latest" Nov 23 00:35:50 crc kubenswrapper[4743]: E1123 00:35:50.757503 4743 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:oauth-proxy,Image:quay.io/openshift/origin-oauth-proxy:latest,Command:[],Args:[-https-address=:8083 -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-service-account=smart-gateway -upstream=http://localhost:8081/ -openshift-delegate-urls={\"/\": {\"namespace\": \"service-telemetry\", \"resource\": \"smartgateways\", \"group\": \"smartgateway.infra.watch\", \"verb\": \"get\"}}],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8083,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-cloud1-coll-meter-proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:session-secret,ReadOnly:false,MountPath:/etc/proxy/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xrlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn_service-telemetry(ca66272f-dcb2-4bc1-88d8-ed89f3493798): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 00:35:50 crc kubenswrapper[4743]: E1123 00:35:50.780301 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="3f240806-9a77-4dd9-9962-5778151c1902" Nov 23 00:35:50 crc kubenswrapper[4743]: I1123 00:35:50.877236 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s"] Nov 23 00:35:50 crc kubenswrapper[4743]: I1123 00:35:50.900616 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s"] Nov 23 00:35:50 crc kubenswrapper[4743]: I1123 00:35:50.900716 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:50 crc kubenswrapper[4743]: I1123 00:35:50.914600 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Nov 23 00:35:50 crc kubenswrapper[4743]: I1123 00:35:50.988911 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn"] Nov 23 00:35:50 crc kubenswrapper[4743]: W1123 00:35:50.999729 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d12fd46_053c_430f_9586_8fd69219c820.slice/crio-4a0949affcad76724624a920318daf3c7dc2c990b23072d49c79806b923e8c1b WatchSource:0}: Error finding container 4a0949affcad76724624a920318daf3c7dc2c990b23072d49c79806b923e8c1b: Status 404 returned error can't find the container with id 4a0949affcad76724624a920318daf3c7dc2c990b23072d49c79806b923e8c1b Nov 23 00:35:51 crc kubenswrapper[4743]: E1123 00:35:51.003505 4743 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/prometheus/alertmanager:latest" Nov 23 00:35:51 crc kubenswrapper[4743]: E1123 00:35:51.003718 4743 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:alertmanager,Image:quay.io/prometheus/alertmanager:latest,Command:[],Args:[--config.file=/etc/alertmanager/config_out/alertmanager.env.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address= --web.listen-address=127.0.0.1:9093 --web.route-prefix=/ --cluster.label=service-telemetry/default --cluster.peer=alertmanager-default-0.alertmanager-operated:9094 --cluster.reconnect-timeout=5m --web.config.file=/etc/alertmanager/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mesh-tcp,HostPort:0,ContainerPort:9094,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-udp,HostPort:0,ContainerPort:9094,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{memory: {{209715200 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:false,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/alertmanager/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:alertmanager-default-db,ReadOnly:false,MountPath:/alertmanager,SubPath:alertmanager-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-alertmanager-proxy-tls,ReadOnly:true,MountPath:/etc/alertmanager/secrets/default-alertmanager-proxy-tls,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-session-secret,ReadOnly:true,MountPath:/etc/alertmanager/secrets/default-session-secret,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cluster-tls-config,ReadOnly:true,MountPath:/etc/alertmanager/cluster_tls_config/cluster-tls-config.yaml,SubPath:cluster-tls-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgpmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-default-0_service-telemetry(17173603-6315-4375-8b4f-75b534bb9af2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.065046 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7zlx\" (UniqueName: \"kubernetes.io/projected/80cd6983-1a53-4718-a4e1-5d0d0d711e49-kube-api-access-h7zlx\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.065528 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/80cd6983-1a53-4718-a4e1-5d0d0d711e49-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.065657 4743 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/80cd6983-1a53-4718-a4e1-5d0d0d711e49-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.065686 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/80cd6983-1a53-4718-a4e1-5d0d0d711e49-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.167395 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7zlx\" (UniqueName: \"kubernetes.io/projected/80cd6983-1a53-4718-a4e1-5d0d0d711e49-kube-api-access-h7zlx\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.167440 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/80cd6983-1a53-4718-a4e1-5d0d0d711e49-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.167515 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/80cd6983-1a53-4718-a4e1-5d0d0d711e49-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.167538 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/80cd6983-1a53-4718-a4e1-5d0d0d711e49-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.167951 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/80cd6983-1a53-4718-a4e1-5d0d0d711e49-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.171373 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/80cd6983-1a53-4718-a4e1-5d0d0d711e49-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 
00:35:51.177829 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/80cd6983-1a53-4718-a4e1-5d0d0d711e49-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.184962 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7zlx\" (UniqueName: \"kubernetes.io/projected/80cd6983-1a53-4718-a4e1-5d0d0d711e49-kube-api-access-h7zlx\") pod \"default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s\" (UID: \"80cd6983-1a53-4718-a4e1-5d0d0d711e49\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.239767 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.416595 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm"] Nov 23 00:35:51 crc kubenswrapper[4743]: W1123 00:35:51.425516 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2ea53ae_9c57_4fd9_95f7_f52a8f11fc2a.slice/crio-0e03ccfcdd5e30205e94d8790f37440b510c5b93e3149316f23f696f5f1e4cd5 WatchSource:0}: Error finding container 0e03ccfcdd5e30205e94d8790f37440b510c5b93e3149316f23f696f5f1e4cd5: Status 404 returned error can't find the container with id 0e03ccfcdd5e30205e94d8790f37440b510c5b93e3149316f23f696f5f1e4cd5 Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.473235 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx"] Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.671654 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s"] Nov 23 00:35:51 crc kubenswrapper[4743]: W1123 00:35:51.709651 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80cd6983_1a53_4718_a4e1_5d0d0d711e49.slice/crio-3c03ae546997c1e14f9a1ec78591894f6aedc02cad182e8ffcba6a6f6b7ee6af WatchSource:0}: Error finding container 3c03ae546997c1e14f9a1ec78591894f6aedc02cad182e8ffcba6a6f6b7ee6af: Status 404 returned error can't find the container with id 3c03ae546997c1e14f9a1ec78591894f6aedc02cad182e8ffcba6a6f6b7ee6af Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.791347 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" event={"ID":"8d12fd46-053c-430f-9586-8fd69219c820","Type":"ContainerStarted","Data":"4a0949affcad76724624a920318daf3c7dc2c990b23072d49c79806b923e8c1b"} Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.792672 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" event={"ID":"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a","Type":"ContainerStarted","Data":"0e03ccfcdd5e30205e94d8790f37440b510c5b93e3149316f23f696f5f1e4cd5"} Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.794856 4743 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" event={"ID":"ffc96728-970f-4f41-afe2-14bb99a1c727","Type":"ContainerStarted","Data":"37e7f48632079a1ab5b6cfa5e74245c7ecefc0a1e52fe296b76cb7a7f3dde723"} Nov 23 00:35:51 crc kubenswrapper[4743]: I1123 00:35:51.796599 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" event={"ID":"80cd6983-1a53-4718-a4e1-5d0d0d711e49","Type":"ContainerStarted","Data":"3c03ae546997c1e14f9a1ec78591894f6aedc02cad182e8ffcba6a6f6b7ee6af"} Nov 23 00:35:52 crc kubenswrapper[4743]: I1123 00:35:52.849326 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" event={"ID":"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a","Type":"ContainerStarted","Data":"ef74dc50d4322b81c254723c21db157a6bf91b5f53160073dcb9bb8360aaaa6a"} Nov 23 00:35:52 crc kubenswrapper[4743]: I1123 00:35:52.854202 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"17173603-6315-4375-8b4f-75b534bb9af2","Type":"ContainerStarted","Data":"26d672ad634c2fb0d1a0169733c131fa9ef8d901ef304f1568154806880f00ee"} Nov 23 00:35:52 crc kubenswrapper[4743]: I1123 00:35:52.861324 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" event={"ID":"8d12fd46-053c-430f-9586-8fd69219c820","Type":"ContainerStarted","Data":"25a4b9d6d18425b93dcc089a4c1c7aa95b235d0c129ba79e6baebb8a468bb66b"} Nov 23 00:35:53 crc kubenswrapper[4743]: E1123 00:35:53.387512 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/alertmanager-default-0" podUID="17173603-6315-4375-8b4f-75b534bb9af2" Nov 23 00:35:53 crc kubenswrapper[4743]: I1123 00:35:53.722904 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:35:53 crc kubenswrapper[4743]: E1123 00:35:53.723237 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:35:53 crc kubenswrapper[4743]: I1123 00:35:53.815915 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Nov 23 00:35:53 crc kubenswrapper[4743]: I1123 00:35:53.815989 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Nov 23 00:35:53 crc kubenswrapper[4743]: I1123 00:35:53.863082 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Nov 23 00:35:53 crc kubenswrapper[4743]: I1123 00:35:53.871299 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"17173603-6315-4375-8b4f-75b534bb9af2","Type":"ContainerStarted","Data":"da566205c1d0dc7fccfffbf81711a2e1c50f66081106b679d4189d6ecc9db806"} Nov 23 00:35:53 crc 
kubenswrapper[4743]: E1123 00:35:53.873248 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/prometheus/alertmanager:latest\\\"\"" pod="service-telemetry/alertmanager-default-0" podUID="17173603-6315-4375-8b4f-75b534bb9af2" Nov 23 00:35:54 crc kubenswrapper[4743]: E1123 00:35:54.879049 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/prometheus/alertmanager:latest\\\"\"" pod="service-telemetry/alertmanager-default-0" podUID="17173603-6315-4375-8b4f-75b534bb9af2" Nov 23 00:35:57 crc kubenswrapper[4743]: E1123 00:35:57.406778 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="3f240806-9a77-4dd9-9962-5778151c1902" Nov 23 00:35:57 crc kubenswrapper[4743]: I1123 00:35:57.903420 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" event={"ID":"80cd6983-1a53-4718-a4e1-5d0d0d711e49","Type":"ContainerStarted","Data":"1f798bd09ff055fe7a1c32f4e5e0a14102f36c40cd1d69ecdcd89eaa74f6c6e9"} Nov 23 00:35:57 crc kubenswrapper[4743]: E1123 00:35:57.905825 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="3f240806-9a77-4dd9-9962-5778151c1902" Nov 23 00:35:57 crc kubenswrapper[4743]: I1123 00:35:57.994329 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Nov 23 00:35:58 crc kubenswrapper[4743]: I1123 00:35:58.914762 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" event={"ID":"ffc96728-970f-4f41-afe2-14bb99a1c727","Type":"ContainerStarted","Data":"59b37d9adb5a20349a939eee9c7f94c745f0dfdf1b0d816f9234cc6b8d9e5fb5"} Nov 23 00:35:58 crc kubenswrapper[4743]: I1123 00:35:58.917572 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" event={"ID":"8d12fd46-053c-430f-9586-8fd69219c820","Type":"ContainerStarted","Data":"0388f1b7ebaea621188beebb87ebd1c6240e36f46a5d1a6a10fa1c8842b9aab7"} Nov 23 00:35:58 crc kubenswrapper[4743]: I1123 00:35:58.928874 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" event={"ID":"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a","Type":"ContainerStarted","Data":"db8995712d823efc767d720db5515a3ff07e6a686373f05fae2174d667f4a924"} Nov 23 00:35:58 crc kubenswrapper[4743]: I1123 00:35:58.932768 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" event={"ID":"ca66272f-dcb2-4bc1-88d8-ed89f3493798","Type":"ContainerStarted","Data":"3c21d7babf45053fa00043ee82c1df6891fca196aa9d2af35545c3d7e4274f65"} Nov 23 00:35:58 crc kubenswrapper[4743]: E1123 00:35:58.934530 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="3f240806-9a77-4dd9-9962-5778151c1902" Nov 23 00:36:05 crc kubenswrapper[4743]: I1123 00:36:05.112625 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gltk2"] Nov 23 00:36:05 crc kubenswrapper[4743]: I1123 00:36:05.113834 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-gltk2" podUID="4a012c6e-41ee-427c-a507-11683e7bcd41" containerName="default-interconnect" containerID="cri-o://aac3531e4fc08a56a8fbef67dc7e74002eb2e23e3d94052e7a0b04e419274702" gracePeriod=30 Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:05.990260 4743 generic.go:334] "Generic (PLEG): container finished" podID="ffc96728-970f-4f41-afe2-14bb99a1c727" containerID="59b37d9adb5a20349a939eee9c7f94c745f0dfdf1b0d816f9234cc6b8d9e5fb5" exitCode=0 Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:05.990878 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" event={"ID":"ffc96728-970f-4f41-afe2-14bb99a1c727","Type":"ContainerDied","Data":"59b37d9adb5a20349a939eee9c7f94c745f0dfdf1b0d816f9234cc6b8d9e5fb5"} Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:05.992618 4743 generic.go:334] "Generic (PLEG): container finished" podID="4a012c6e-41ee-427c-a507-11683e7bcd41" containerID="aac3531e4fc08a56a8fbef67dc7e74002eb2e23e3d94052e7a0b04e419274702" exitCode=0 Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:05.992681 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-gltk2" event={"ID":"4a012c6e-41ee-427c-a507-11683e7bcd41","Type":"ContainerDied","Data":"aac3531e4fc08a56a8fbef67dc7e74002eb2e23e3d94052e7a0b04e419274702"} Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:05.995014 4743 generic.go:334] "Generic (PLEG): container finished" podID="8d12fd46-053c-430f-9586-8fd69219c820" containerID="0388f1b7ebaea621188beebb87ebd1c6240e36f46a5d1a6a10fa1c8842b9aab7" exitCode=0 Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:05.995037 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" event={"ID":"8d12fd46-053c-430f-9586-8fd69219c820","Type":"ContainerDied","Data":"0388f1b7ebaea621188beebb87ebd1c6240e36f46a5d1a6a10fa1c8842b9aab7"} Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.174188 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.204772 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t4wtq"] Nov 23 00:36:06 crc kubenswrapper[4743]: E1123 00:36:06.205026 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a012c6e-41ee-427c-a507-11683e7bcd41" containerName="default-interconnect" Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.205039 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a012c6e-41ee-427c-a507-11683e7bcd41" containerName="default-interconnect" Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.205153 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a012c6e-41ee-427c-a507-11683e7bcd41" containerName="default-interconnect" Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.205614 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-t4wtq" Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.218518 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t4wtq"] Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.299286 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-users\") pod \"4a012c6e-41ee-427c-a507-11683e7bcd41\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.299359 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-credentials\") pod \"4a012c6e-41ee-427c-a507-11683e7bcd41\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.299396 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjcvw\" (UniqueName: \"kubernetes.io/projected/4a012c6e-41ee-427c-a507-11683e7bcd41-kube-api-access-gjcvw\") pod \"4a012c6e-41ee-427c-a507-11683e7bcd41\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.299443 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-config\") pod \"4a012c6e-41ee-427c-a507-11683e7bcd41\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.299470 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-ca\") pod \"4a012c6e-41ee-427c-a507-11683e7bcd41\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.299510 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-ca\") pod \"4a012c6e-41ee-427c-a507-11683e7bcd41\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") " Nov 23 00:36:06 crc kubenswrapper[4743]: 
I1123 00:36:06.299556 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-credentials\") pod \"4a012c6e-41ee-427c-a507-11683e7bcd41\" (UID: \"4a012c6e-41ee-427c-a507-11683e7bcd41\") "
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.300176 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "4a012c6e-41ee-427c-a507-11683e7bcd41" (UID: "4a012c6e-41ee-427c-a507-11683e7bcd41"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.300496 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.300660 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.300707 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.300758 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-sasl-config\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.300871 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.300950 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-sasl-users\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.300975 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8zj7\" (UniqueName: \"kubernetes.io/projected/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-kube-api-access-f8zj7\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.301548 4743 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-config\") on node \"crc\" DevicePath \"\""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.304824 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "4a012c6e-41ee-427c-a507-11683e7bcd41" (UID: "4a012c6e-41ee-427c-a507-11683e7bcd41"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.305032 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "4a012c6e-41ee-427c-a507-11683e7bcd41" (UID: "4a012c6e-41ee-427c-a507-11683e7bcd41"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.305355 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a012c6e-41ee-427c-a507-11683e7bcd41-kube-api-access-gjcvw" (OuterVolumeSpecName: "kube-api-access-gjcvw") pod "4a012c6e-41ee-427c-a507-11683e7bcd41" (UID: "4a012c6e-41ee-427c-a507-11683e7bcd41"). InnerVolumeSpecName "kube-api-access-gjcvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.306895 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "4a012c6e-41ee-427c-a507-11683e7bcd41" (UID: "4a012c6e-41ee-427c-a507-11683e7bcd41"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.312125 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "4a012c6e-41ee-427c-a507-11683e7bcd41" (UID: "4a012c6e-41ee-427c-a507-11683e7bcd41"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.319301 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "4a012c6e-41ee-427c-a507-11683e7bcd41" (UID: "4a012c6e-41ee-427c-a507-11683e7bcd41"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402393 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402454 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402474 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402519 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-sasl-config\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402562 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402606 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-sasl-users\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402632 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8zj7\" (UniqueName: \"kubernetes.io/projected/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-kube-api-access-f8zj7\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402686 4743 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402702 4743 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402715 4743 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402728 4743 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-sasl-users\") on node \"crc\" DevicePath \"\""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402740 4743 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4a012c6e-41ee-427c-a507-11683e7bcd41-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.402754 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjcvw\" (UniqueName: \"kubernetes.io/projected/4a012c6e-41ee-427c-a507-11683e7bcd41-kube-api-access-gjcvw\") on node \"crc\" DevicePath \"\""
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.403902 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-sasl-config\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.406157 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.406677 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.406716 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-sasl-users\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq"
pod="service-telemetry/default-interconnect-68864d46cb-t4wtq" Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.413847 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq" Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.414366 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq" Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.418621 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8zj7\" (UniqueName: \"kubernetes.io/projected/b4f28bbd-2af7-4491-aae2-8fe3dddec07b-kube-api-access-f8zj7\") pod \"default-interconnect-68864d46cb-t4wtq\" (UID: \"b4f28bbd-2af7-4491-aae2-8fe3dddec07b\") " pod="service-telemetry/default-interconnect-68864d46cb-t4wtq" Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.536477 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-t4wtq" Nov 23 00:36:06 crc kubenswrapper[4743]: I1123 00:36:06.725692 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:36:06 crc kubenswrapper[4743]: E1123 00:36:06.725899 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.002052 4743 generic.go:334] "Generic (PLEG): container finished" podID="e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a" containerID="db8995712d823efc767d720db5515a3ff07e6a686373f05fae2174d667f4a924" exitCode=0 Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.002113 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" event={"ID":"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a","Type":"ContainerDied","Data":"db8995712d823efc767d720db5515a3ff07e6a686373f05fae2174d667f4a924"} Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.003645 4743 generic.go:334] "Generic (PLEG): container finished" podID="ca66272f-dcb2-4bc1-88d8-ed89f3493798" containerID="3c21d7babf45053fa00043ee82c1df6891fca196aa9d2af35545c3d7e4274f65" exitCode=0 Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.003682 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" event={"ID":"ca66272f-dcb2-4bc1-88d8-ed89f3493798","Type":"ContainerDied","Data":"3c21d7babf45053fa00043ee82c1df6891fca196aa9d2af35545c3d7e4274f65"} Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.005832 4743 generic.go:334] 
"Generic (PLEG): container finished" podID="80cd6983-1a53-4718-a4e1-5d0d0d711e49" containerID="1f798bd09ff055fe7a1c32f4e5e0a14102f36c40cd1d69ecdcd89eaa74f6c6e9" exitCode=0 Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.005870 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" event={"ID":"80cd6983-1a53-4718-a4e1-5d0d0d711e49","Type":"ContainerDied","Data":"1f798bd09ff055fe7a1c32f4e5e0a14102f36c40cd1d69ecdcd89eaa74f6c6e9"} Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.007504 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-gltk2" event={"ID":"4a012c6e-41ee-427c-a507-11683e7bcd41","Type":"ContainerDied","Data":"ab7d78619e3c7acdb568520125bb9edd890cda163011a5dc29e6dc6358e0c3d4"} Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.007532 4743 scope.go:117] "RemoveContainer" containerID="aac3531e4fc08a56a8fbef67dc7e74002eb2e23e3d94052e7a0b04e419274702" Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.007648 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-gltk2" Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.030142 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gltk2"] Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.036387 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gltk2"] Nov 23 00:36:07 crc kubenswrapper[4743]: I1123 00:36:07.590610 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t4wtq"] Nov 23 00:36:07 crc kubenswrapper[4743]: W1123 00:36:07.607196 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4f28bbd_2af7_4491_aae2_8fe3dddec07b.slice/crio-ebe15d5ff7780e78bca77ba15ff1940287cf5d702416bcde60d43f34711b19a6 WatchSource:0}: Error finding container ebe15d5ff7780e78bca77ba15ff1940287cf5d702416bcde60d43f34711b19a6: Status 404 returned error can't find the container with id ebe15d5ff7780e78bca77ba15ff1940287cf5d702416bcde60d43f34711b19a6 Nov 23 00:36:07 crc kubenswrapper[4743]: E1123 00:36:07.714898 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" podUID="ca66272f-dcb2-4bc1-88d8-ed89f3493798" Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.015837 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" event={"ID":"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a","Type":"ContainerStarted","Data":"6cb29840cbfd70cd3ca3faddf1bd7dcd630139d49bf7e0ebf929bde971d07360"} Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.016423 4743 scope.go:117] "RemoveContainer" containerID="db8995712d823efc767d720db5515a3ff07e6a686373f05fae2174d667f4a924" Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.019965 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" 
event={"ID":"ca66272f-dcb2-4bc1-88d8-ed89f3493798","Type":"ContainerStarted","Data":"eec2021d7d881e1e8b3984944a3729d460e160fe41a7fb4f1d2f7564df7b0e91"} Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.020447 4743 scope.go:117] "RemoveContainer" containerID="3c21d7babf45053fa00043ee82c1df6891fca196aa9d2af35545c3d7e4274f65" Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.027848 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-t4wtq" event={"ID":"b4f28bbd-2af7-4491-aae2-8fe3dddec07b","Type":"ContainerStarted","Data":"cd13259df450b85936d4907ca917ce87a37160b19df39c5ce69287cedc27dfd8"} Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.027889 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-t4wtq" event={"ID":"b4f28bbd-2af7-4491-aae2-8fe3dddec07b","Type":"ContainerStarted","Data":"ebe15d5ff7780e78bca77ba15ff1940287cf5d702416bcde60d43f34711b19a6"} Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.030095 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" event={"ID":"ffc96728-970f-4f41-afe2-14bb99a1c727","Type":"ContainerStarted","Data":"9cb1096c88f1d4e6786416003c5794e50b6b43411c4d0f00730c20d8e824eb81"} Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.030805 4743 scope.go:117] "RemoveContainer" containerID="59b37d9adb5a20349a939eee9c7f94c745f0dfdf1b0d816f9234cc6b8d9e5fb5" Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.034830 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" event={"ID":"80cd6983-1a53-4718-a4e1-5d0d0d711e49","Type":"ContainerStarted","Data":"518a2af4fc8de7680290218a39422032484b216a322867a237e6c073dcd1ac8b"} Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.035512 4743 scope.go:117] "RemoveContainer" containerID="1f798bd09ff055fe7a1c32f4e5e0a14102f36c40cd1d69ecdcd89eaa74f6c6e9" Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.046833 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" event={"ID":"8d12fd46-053c-430f-9586-8fd69219c820","Type":"ContainerStarted","Data":"577d829515cb33597e2c2707e7833371adf9a4466dc8493e7102f9ee03c828c2"} Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.047453 4743 scope.go:117] "RemoveContainer" containerID="0388f1b7ebaea621188beebb87ebd1c6240e36f46a5d1a6a10fa1c8842b9aab7" Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.074554 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-t4wtq" podStartSLOduration=3.074532316 podStartE2EDuration="3.074532316s" podCreationTimestamp="2025-11-23 00:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:36:08.06321339 +0000 UTC m=+1760.141311527" watchObservedRunningTime="2025-11-23 00:36:08.074532316 +0000 UTC m=+1760.152630463" Nov 23 00:36:08 crc kubenswrapper[4743]: I1123 00:36:08.741625 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a012c6e-41ee-427c-a507-11683e7bcd41" path="/var/lib/kubelet/pods/4a012c6e-41ee-427c-a507-11683e7bcd41/volumes" Nov 23 00:36:09 crc kubenswrapper[4743]: I1123 00:36:09.059909 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/alertmanager-default-0" event={"ID":"17173603-6315-4375-8b4f-75b534bb9af2","Type":"ContainerStarted","Data":"53953102d331c9844e62a19be2b1ae548be24c4b5f3d4447bd0a580432867a02"} Nov 23 00:36:09 crc kubenswrapper[4743]: I1123 00:36:09.063419 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" event={"ID":"ffc96728-970f-4f41-afe2-14bb99a1c727","Type":"ContainerStarted","Data":"70514751dc9cb7b0b61c1a5a379f5ca035c2c873b6dfd1e7c2a709b03319ce78"} Nov 23 00:36:09 crc kubenswrapper[4743]: I1123 00:36:09.070578 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" event={"ID":"80cd6983-1a53-4718-a4e1-5d0d0d711e49","Type":"ContainerStarted","Data":"0dfbf993e1b3f2b08e6bc1da7e631ba4404fba3a6ecda187694261fe7d71f4b9"} Nov 23 00:36:09 crc kubenswrapper[4743]: I1123 00:36:09.077499 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" event={"ID":"8d12fd46-053c-430f-9586-8fd69219c820","Type":"ContainerStarted","Data":"fa5aab1c0005a3c21f3642737b4731adb159ce908b3f1cbca3a6db62750f9162"} Nov 23 00:36:09 crc kubenswrapper[4743]: I1123 00:36:09.083627 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" event={"ID":"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a","Type":"ContainerStarted","Data":"a8fc11e47f742ed52c4a2e38a21fbcfbd0fb64f84570f6b0605e7cc4506e0ec5"} Nov 23 00:36:09 crc kubenswrapper[4743]: I1123 00:36:09.085870 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" event={"ID":"ca66272f-dcb2-4bc1-88d8-ed89f3493798","Type":"ContainerStarted","Data":"1d30e82bdc043d1794664d4e3e1ad4a1006d7339a058edeaf139a1636a7c1e09"} Nov 23 00:36:09 crc kubenswrapper[4743]: I1123 00:36:09.096812 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=19.279624799 podStartE2EDuration="50.09679574s" podCreationTimestamp="2025-11-23 00:35:19 +0000 UTC" firstStartedPulling="2025-11-23 00:35:37.647264759 +0000 UTC m=+1729.725362886" lastFinishedPulling="2025-11-23 00:36:08.4644357 +0000 UTC m=+1760.542533827" observedRunningTime="2025-11-23 00:36:09.092990137 +0000 UTC m=+1761.171088274" watchObservedRunningTime="2025-11-23 00:36:09.09679574 +0000 UTC m=+1761.174893867" Nov 23 00:36:09 crc kubenswrapper[4743]: I1123 00:36:09.125275 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" podStartSLOduration=15.123998512 podStartE2EDuration="32.125241864s" podCreationTimestamp="2025-11-23 00:35:37 +0000 UTC" firstStartedPulling="2025-11-23 00:35:51.428085671 +0000 UTC m=+1743.506183798" lastFinishedPulling="2025-11-23 00:36:08.429328993 +0000 UTC m=+1760.507427150" observedRunningTime="2025-11-23 00:36:09.124415854 +0000 UTC m=+1761.202513981" watchObservedRunningTime="2025-11-23 00:36:09.125241864 +0000 UTC m=+1761.203340021" Nov 23 00:36:09 crc kubenswrapper[4743]: I1123 00:36:09.150620 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" podStartSLOduration=11.700488597 podStartE2EDuration="29.150600763s" podCreationTimestamp="2025-11-23 
00:35:40 +0000 UTC" firstStartedPulling="2025-11-23 00:35:51.013127825 +0000 UTC m=+1743.091225952" lastFinishedPulling="2025-11-23 00:36:08.463239991 +0000 UTC m=+1760.541338118" observedRunningTime="2025-11-23 00:36:09.147501417 +0000 UTC m=+1761.225599554" watchObservedRunningTime="2025-11-23 00:36:09.150600763 +0000 UTC m=+1761.228698890" Nov 23 00:36:09 crc kubenswrapper[4743]: I1123 00:36:09.168638 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" podStartSLOduration=4.101216076 podStartE2EDuration="21.168621403s" podCreationTimestamp="2025-11-23 00:35:48 +0000 UTC" firstStartedPulling="2025-11-23 00:35:51.48295517 +0000 UTC m=+1743.561053297" lastFinishedPulling="2025-11-23 00:36:08.550360497 +0000 UTC m=+1760.628458624" observedRunningTime="2025-11-23 00:36:09.162572935 +0000 UTC m=+1761.240671102" watchObservedRunningTime="2025-11-23 00:36:09.168621403 +0000 UTC m=+1761.246719530" Nov 23 00:36:09 crc kubenswrapper[4743]: I1123 00:36:09.181688 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" podStartSLOduration=2.31492373 podStartE2EDuration="19.181669611s" podCreationTimestamp="2025-11-23 00:35:50 +0000 UTC" firstStartedPulling="2025-11-23 00:35:51.713872674 +0000 UTC m=+1743.791970801" lastFinishedPulling="2025-11-23 00:36:08.580618555 +0000 UTC m=+1760.658716682" observedRunningTime="2025-11-23 00:36:09.178570846 +0000 UTC m=+1761.256668973" watchObservedRunningTime="2025-11-23 00:36:09.181669611 +0000 UTC m=+1761.259767748" Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.096904 4743 generic.go:334] "Generic (PLEG): container finished" podID="80cd6983-1a53-4718-a4e1-5d0d0d711e49" containerID="0dfbf993e1b3f2b08e6bc1da7e631ba4404fba3a6ecda187694261fe7d71f4b9" exitCode=0 Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.097006 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" event={"ID":"80cd6983-1a53-4718-a4e1-5d0d0d711e49","Type":"ContainerDied","Data":"0dfbf993e1b3f2b08e6bc1da7e631ba4404fba3a6ecda187694261fe7d71f4b9"} Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.097068 4743 scope.go:117] "RemoveContainer" containerID="1f798bd09ff055fe7a1c32f4e5e0a14102f36c40cd1d69ecdcd89eaa74f6c6e9" Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.097561 4743 scope.go:117] "RemoveContainer" containerID="0dfbf993e1b3f2b08e6bc1da7e631ba4404fba3a6ecda187694261fe7d71f4b9" Nov 23 00:36:10 crc kubenswrapper[4743]: E1123 00:36:10.097842 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s_service-telemetry(80cd6983-1a53-4718-a4e1-5d0d0d711e49)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" podUID="80cd6983-1a53-4718-a4e1-5d0d0d711e49" Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.102442 4743 generic.go:334] "Generic (PLEG): container finished" podID="8d12fd46-053c-430f-9586-8fd69219c820" containerID="fa5aab1c0005a3c21f3642737b4731adb159ce908b3f1cbca3a6db62750f9162" exitCode=0 Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.102538 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" event={"ID":"8d12fd46-053c-430f-9586-8fd69219c820","Type":"ContainerDied","Data":"fa5aab1c0005a3c21f3642737b4731adb159ce908b3f1cbca3a6db62750f9162"} Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.103069 4743 scope.go:117] "RemoveContainer" containerID="fa5aab1c0005a3c21f3642737b4731adb159ce908b3f1cbca3a6db62750f9162" Nov 23 00:36:10 crc kubenswrapper[4743]: E1123 00:36:10.103317 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn_service-telemetry(8d12fd46-053c-430f-9586-8fd69219c820)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" podUID="8d12fd46-053c-430f-9586-8fd69219c820" Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.107282 4743 generic.go:334] "Generic (PLEG): container finished" podID="e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a" containerID="a8fc11e47f742ed52c4a2e38a21fbcfbd0fb64f84570f6b0605e7cc4506e0ec5" exitCode=0 Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.107359 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" event={"ID":"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a","Type":"ContainerDied","Data":"a8fc11e47f742ed52c4a2e38a21fbcfbd0fb64f84570f6b0605e7cc4506e0ec5"} Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.107885 4743 scope.go:117] "RemoveContainer" containerID="a8fc11e47f742ed52c4a2e38a21fbcfbd0fb64f84570f6b0605e7cc4506e0ec5" Nov 23 00:36:10 crc kubenswrapper[4743]: E1123 00:36:10.108176 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm_service-telemetry(e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" podUID="e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a" Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.114357 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" event={"ID":"ca66272f-dcb2-4bc1-88d8-ed89f3493798","Type":"ContainerStarted","Data":"4d57e92fbcd7a9e08498f5e9e262f2d08645ebaf6bef836c1fadce7d0fc77c18"} Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.127757 4743 generic.go:334] "Generic (PLEG): container finished" podID="ffc96728-970f-4f41-afe2-14bb99a1c727" containerID="70514751dc9cb7b0b61c1a5a379f5ca035c2c873b6dfd1e7c2a709b03319ce78" exitCode=0 Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.127854 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" event={"ID":"ffc96728-970f-4f41-afe2-14bb99a1c727","Type":"ContainerDied","Data":"70514751dc9cb7b0b61c1a5a379f5ca035c2c873b6dfd1e7c2a709b03319ce78"} Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.128395 4743 scope.go:117] "RemoveContainer" containerID="70514751dc9cb7b0b61c1a5a379f5ca035c2c873b6dfd1e7c2a709b03319ce78" Nov 23 00:36:10 crc kubenswrapper[4743]: E1123 00:36:10.128764 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-coll-event-smartgateway-5f89974568-q42fx_service-telemetry(ffc96728-970f-4f41-afe2-14bb99a1c727)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" podUID="ffc96728-970f-4f41-afe2-14bb99a1c727" Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.143730 4743 scope.go:117] "RemoveContainer" containerID="0388f1b7ebaea621188beebb87ebd1c6240e36f46a5d1a6a10fa1c8842b9aab7" Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.191546 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" podStartSLOduration=4.54935824 podStartE2EDuration="37.191531093s" podCreationTimestamp="2025-11-23 00:35:33 +0000 UTC" firstStartedPulling="2025-11-23 00:35:36.268150407 +0000 UTC m=+1728.346248534" lastFinishedPulling="2025-11-23 00:36:08.91032326 +0000 UTC m=+1760.988421387" observedRunningTime="2025-11-23 00:36:10.188709564 +0000 UTC m=+1762.266807691" watchObservedRunningTime="2025-11-23 00:36:10.191531093 +0000 UTC m=+1762.269629210" Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.197867 4743 scope.go:117] "RemoveContainer" containerID="db8995712d823efc767d720db5515a3ff07e6a686373f05fae2174d667f4a924" Nov 23 00:36:10 crc kubenswrapper[4743]: I1123 00:36:10.249504 4743 scope.go:117] "RemoveContainer" containerID="59b37d9adb5a20349a939eee9c7f94c745f0dfdf1b0d816f9234cc6b8d9e5fb5" Nov 23 00:36:11 crc kubenswrapper[4743]: I1123 00:36:11.139783 4743 scope.go:117] "RemoveContainer" containerID="70514751dc9cb7b0b61c1a5a379f5ca035c2c873b6dfd1e7c2a709b03319ce78" Nov 23 00:36:11 crc kubenswrapper[4743]: E1123 00:36:11.140561 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-5f89974568-q42fx_service-telemetry(ffc96728-970f-4f41-afe2-14bb99a1c727)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" podUID="ffc96728-970f-4f41-afe2-14bb99a1c727" Nov 23 00:36:11 crc kubenswrapper[4743]: I1123 00:36:11.144662 4743 scope.go:117] "RemoveContainer" containerID="0dfbf993e1b3f2b08e6bc1da7e631ba4404fba3a6ecda187694261fe7d71f4b9" Nov 23 00:36:11 crc kubenswrapper[4743]: E1123 00:36:11.144980 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s_service-telemetry(80cd6983-1a53-4718-a4e1-5d0d0d711e49)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" podUID="80cd6983-1a53-4718-a4e1-5d0d0d711e49" Nov 23 00:36:11 crc kubenswrapper[4743]: I1123 00:36:11.146832 4743 scope.go:117] "RemoveContainer" containerID="fa5aab1c0005a3c21f3642737b4731adb159ce908b3f1cbca3a6db62750f9162" Nov 23 00:36:11 crc kubenswrapper[4743]: E1123 00:36:11.147049 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn_service-telemetry(8d12fd46-053c-430f-9586-8fd69219c820)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" podUID="8d12fd46-053c-430f-9586-8fd69219c820" Nov 23 00:36:11 crc kubenswrapper[4743]: I1123 00:36:11.151616 4743 scope.go:117] "RemoveContainer" 
containerID="a8fc11e47f742ed52c4a2e38a21fbcfbd0fb64f84570f6b0605e7cc4506e0ec5" Nov 23 00:36:11 crc kubenswrapper[4743]: E1123 00:36:11.151874 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm_service-telemetry(e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" podUID="e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a" Nov 23 00:36:11 crc kubenswrapper[4743]: I1123 00:36:11.162262 4743 generic.go:334] "Generic (PLEG): container finished" podID="ca66272f-dcb2-4bc1-88d8-ed89f3493798" containerID="4d57e92fbcd7a9e08498f5e9e262f2d08645ebaf6bef836c1fadce7d0fc77c18" exitCode=0 Nov 23 00:36:11 crc kubenswrapper[4743]: I1123 00:36:11.162563 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" event={"ID":"ca66272f-dcb2-4bc1-88d8-ed89f3493798","Type":"ContainerDied","Data":"4d57e92fbcd7a9e08498f5e9e262f2d08645ebaf6bef836c1fadce7d0fc77c18"} Nov 23 00:36:11 crc kubenswrapper[4743]: I1123 00:36:11.163225 4743 scope.go:117] "RemoveContainer" containerID="4d57e92fbcd7a9e08498f5e9e262f2d08645ebaf6bef836c1fadce7d0fc77c18" Nov 23 00:36:11 crc kubenswrapper[4743]: E1123 00:36:11.163683 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn_service-telemetry(ca66272f-dcb2-4bc1-88d8-ed89f3493798)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" podUID="ca66272f-dcb2-4bc1-88d8-ed89f3493798" Nov 23 00:36:11 crc kubenswrapper[4743]: I1123 00:36:11.163789 4743 scope.go:117] "RemoveContainer" containerID="3c21d7babf45053fa00043ee82c1df6891fca196aa9d2af35545c3d7e4274f65" Nov 23 00:36:12 crc kubenswrapper[4743]: I1123 00:36:12.174979 4743 scope.go:117] "RemoveContainer" containerID="4d57e92fbcd7a9e08498f5e9e262f2d08645ebaf6bef836c1fadce7d0fc77c18" Nov 23 00:36:12 crc kubenswrapper[4743]: E1123 00:36:12.175217 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn_service-telemetry(ca66272f-dcb2-4bc1-88d8-ed89f3493798)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" podUID="ca66272f-dcb2-4bc1-88d8-ed89f3493798" Nov 23 00:36:14 crc kubenswrapper[4743]: I1123 00:36:14.190553 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3f240806-9a77-4dd9-9962-5778151c1902","Type":"ContainerStarted","Data":"39403788d6db946df3b69bb69f673468f7fd30816aec132b5dc20b6619b6fdc5"} Nov 23 00:36:20 crc kubenswrapper[4743]: I1123 00:36:20.722931 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:36:20 crc kubenswrapper[4743]: E1123 00:36:20.723650 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:36:23 crc kubenswrapper[4743]: I1123 00:36:23.723308 4743 scope.go:117] "RemoveContainer" containerID="0dfbf993e1b3f2b08e6bc1da7e631ba4404fba3a6ecda187694261fe7d71f4b9" Nov 23 00:36:23 crc kubenswrapper[4743]: I1123 00:36:23.723831 4743 scope.go:117] "RemoveContainer" containerID="a8fc11e47f742ed52c4a2e38a21fbcfbd0fb64f84570f6b0605e7cc4506e0ec5" Nov 23 00:36:23 crc kubenswrapper[4743]: I1123 00:36:23.724116 4743 scope.go:117] "RemoveContainer" containerID="4d57e92fbcd7a9e08498f5e9e262f2d08645ebaf6bef836c1fadce7d0fc77c18" Nov 23 00:36:24 crc kubenswrapper[4743]: I1123 00:36:24.253046 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm" event={"ID":"e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a","Type":"ContainerStarted","Data":"8130b0487b9bc392fe1f5a0881eb1d3d8accfe2ff823758db9ac6dd8179488c5"} Nov 23 00:36:24 crc kubenswrapper[4743]: I1123 00:36:24.270194 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn" event={"ID":"ca66272f-dcb2-4bc1-88d8-ed89f3493798","Type":"ContainerStarted","Data":"1866b1f14acdae5bf72d4fef4f2d0c67ee6915aa870e053ddf9407ebf98fb746"} Nov 23 00:36:24 crc kubenswrapper[4743]: I1123 00:36:24.273777 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=15.115478152 podStartE2EDuration="1m19.27375443s" podCreationTimestamp="2025-11-23 00:35:05 +0000 UTC" firstStartedPulling="2025-11-23 00:35:09.112852399 +0000 UTC m=+1701.190950526" lastFinishedPulling="2025-11-23 00:36:13.271128677 +0000 UTC m=+1765.349226804" observedRunningTime="2025-11-23 00:36:14.222087822 +0000 UTC m=+1766.300185959" watchObservedRunningTime="2025-11-23 00:36:24.27375443 +0000 UTC m=+1776.351852557" Nov 23 00:36:24 crc kubenswrapper[4743]: I1123 00:36:24.277333 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s" event={"ID":"80cd6983-1a53-4718-a4e1-5d0d0d711e49","Type":"ContainerStarted","Data":"b0d71f9dbba743cd920920eb37cc6e38640a11ed05a71059dc5c4446eb95f7af"} Nov 23 00:36:25 crc kubenswrapper[4743]: I1123 00:36:25.736204 4743 scope.go:117] "RemoveContainer" containerID="fa5aab1c0005a3c21f3642737b4731adb159ce908b3f1cbca3a6db62750f9162" Nov 23 00:36:26 crc kubenswrapper[4743]: I1123 00:36:26.291759 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn" event={"ID":"8d12fd46-053c-430f-9586-8fd69219c820","Type":"ContainerStarted","Data":"b38495b97dd29d95267999ac4cda31424f857004ae98b44648310a540ae361ed"} Nov 23 00:36:26 crc kubenswrapper[4743]: I1123 00:36:26.726906 4743 scope.go:117] "RemoveContainer" containerID="70514751dc9cb7b0b61c1a5a379f5ca035c2c873b6dfd1e7c2a709b03319ce78" Nov 23 00:36:27 crc kubenswrapper[4743]: I1123 00:36:27.300824 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f89974568-q42fx" event={"ID":"ffc96728-970f-4f41-afe2-14bb99a1c727","Type":"ContainerStarted","Data":"791053a1f1b18f2c967484c9c815f43adbb5f9a1f8d076feefba7e212acc3801"} Nov 23 00:36:32 crc kubenswrapper[4743]: I1123 00:36:32.723611 4743 scope.go:117] "RemoveContainer" 
containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:36:32 crc kubenswrapper[4743]: E1123 00:36:32.726396 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.787331 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.788628 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.790297 4743 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.793721 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.802865 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.837040 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/be033c0c-fc51-4ec7-818f-d50c65fbda70-qdr-test-config\") pod \"qdr-test\" (UID: \"be033c0c-fc51-4ec7-818f-d50c65fbda70\") " pod="service-telemetry/qdr-test" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.837175 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nszr5\" (UniqueName: \"kubernetes.io/projected/be033c0c-fc51-4ec7-818f-d50c65fbda70-kube-api-access-nszr5\") pod \"qdr-test\" (UID: \"be033c0c-fc51-4ec7-818f-d50c65fbda70\") " pod="service-telemetry/qdr-test" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.837212 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/be033c0c-fc51-4ec7-818f-d50c65fbda70-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"be033c0c-fc51-4ec7-818f-d50c65fbda70\") " pod="service-telemetry/qdr-test" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.937910 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nszr5\" (UniqueName: \"kubernetes.io/projected/be033c0c-fc51-4ec7-818f-d50c65fbda70-kube-api-access-nszr5\") pod \"qdr-test\" (UID: \"be033c0c-fc51-4ec7-818f-d50c65fbda70\") " pod="service-telemetry/qdr-test" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.937999 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/be033c0c-fc51-4ec7-818f-d50c65fbda70-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"be033c0c-fc51-4ec7-818f-d50c65fbda70\") " pod="service-telemetry/qdr-test" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.938079 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: 
\"kubernetes.io/configmap/be033c0c-fc51-4ec7-818f-d50c65fbda70-qdr-test-config\") pod \"qdr-test\" (UID: \"be033c0c-fc51-4ec7-818f-d50c65fbda70\") " pod="service-telemetry/qdr-test" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.938918 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/be033c0c-fc51-4ec7-818f-d50c65fbda70-qdr-test-config\") pod \"qdr-test\" (UID: \"be033c0c-fc51-4ec7-818f-d50c65fbda70\") " pod="service-telemetry/qdr-test" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.944700 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/be033c0c-fc51-4ec7-818f-d50c65fbda70-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"be033c0c-fc51-4ec7-818f-d50c65fbda70\") " pod="service-telemetry/qdr-test" Nov 23 00:36:39 crc kubenswrapper[4743]: I1123 00:36:39.953285 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nszr5\" (UniqueName: \"kubernetes.io/projected/be033c0c-fc51-4ec7-818f-d50c65fbda70-kube-api-access-nszr5\") pod \"qdr-test\" (UID: \"be033c0c-fc51-4ec7-818f-d50c65fbda70\") " pod="service-telemetry/qdr-test" Nov 23 00:36:40 crc kubenswrapper[4743]: I1123 00:36:40.116553 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Nov 23 00:36:40 crc kubenswrapper[4743]: I1123 00:36:40.538391 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Nov 23 00:36:41 crc kubenswrapper[4743]: I1123 00:36:41.410249 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"be033c0c-fc51-4ec7-818f-d50c65fbda70","Type":"ContainerStarted","Data":"c678470bf3e32731ea365f731e67fbe8b55ea1f7b596f230f83f7de0de2b02fb"} Nov 23 00:36:43 crc kubenswrapper[4743]: I1123 00:36:43.722665 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:36:43 crc kubenswrapper[4743]: E1123 00:36:43.723197 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.483588 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"be033c0c-fc51-4ec7-818f-d50c65fbda70","Type":"ContainerStarted","Data":"93b92de5c45ae9be6734222eec3b3d420e702b8c45012120e98dd48c925d7bdd"} Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.503590 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.139108245 podStartE2EDuration="11.503572194s" podCreationTimestamp="2025-11-23 00:36:39 +0000 UTC" firstStartedPulling="2025-11-23 00:36:40.553412685 +0000 UTC m=+1792.631510812" lastFinishedPulling="2025-11-23 00:36:49.917876634 +0000 UTC m=+1801.995974761" observedRunningTime="2025-11-23 00:36:50.497909966 +0000 UTC m=+1802.576008103" watchObservedRunningTime="2025-11-23 00:36:50.503572194 +0000 UTC m=+1802.581670341" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 
00:36:50.832372 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hzf8t"] Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.834943 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.836437 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hzf8t"] Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.836461 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.848671 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.848857 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.848895 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.849073 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.849113 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.905245 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-healthcheck-log\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.905478 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-config\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.905676 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.905797 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.905985 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.906111 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-sensubility-config\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:50 crc kubenswrapper[4743]: I1123 00:36:50.906240 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5xb\" (UniqueName: \"kubernetes.io/projected/9fd1a29d-4a6b-43f4-acad-db212a968135-kube-api-access-wj5xb\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.007086 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.007146 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.007182 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.007220 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-sensubility-config\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.007279 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj5xb\" (UniqueName: \"kubernetes.io/projected/9fd1a29d-4a6b-43f4-acad-db212a968135-kube-api-access-wj5xb\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.007327 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-healthcheck-log\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc 
kubenswrapper[4743]: I1123 00:36:51.007350 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-config\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.008327 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-config\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.008343 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.008567 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.008665 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-sensubility-config\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.008934 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-healthcheck-log\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.009193 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.039895 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj5xb\" (UniqueName: \"kubernetes.io/projected/9fd1a29d-4a6b-43f4-acad-db212a968135-kube-api-access-wj5xb\") pod \"stf-smoketest-smoke1-hzf8t\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.172377 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.211214 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.211999 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.227742 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.311694 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmtdz\" (UniqueName: \"kubernetes.io/projected/793b9668-bdda-4921-8770-a0132de1b7b4-kube-api-access-cmtdz\") pod \"curl\" (UID: \"793b9668-bdda-4921-8770-a0132de1b7b4\") " pod="service-telemetry/curl" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.381286 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hzf8t"] Nov 23 00:36:51 crc kubenswrapper[4743]: W1123 00:36:51.403215 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fd1a29d_4a6b_43f4_acad_db212a968135.slice/crio-4d9cb0ed74c125a5783abde1d6cb897c2fce4a2026c7789331a10c07868b122d WatchSource:0}: Error finding container 4d9cb0ed74c125a5783abde1d6cb897c2fce4a2026c7789331a10c07868b122d: Status 404 returned error can't find the container with id 4d9cb0ed74c125a5783abde1d6cb897c2fce4a2026c7789331a10c07868b122d Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.412722 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmtdz\" (UniqueName: \"kubernetes.io/projected/793b9668-bdda-4921-8770-a0132de1b7b4-kube-api-access-cmtdz\") pod \"curl\" (UID: \"793b9668-bdda-4921-8770-a0132de1b7b4\") " pod="service-telemetry/curl" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.428882 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmtdz\" (UniqueName: \"kubernetes.io/projected/793b9668-bdda-4921-8770-a0132de1b7b4-kube-api-access-cmtdz\") pod \"curl\" (UID: \"793b9668-bdda-4921-8770-a0132de1b7b4\") " pod="service-telemetry/curl" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.490628 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hzf8t" event={"ID":"9fd1a29d-4a6b-43f4-acad-db212a968135","Type":"ContainerStarted","Data":"4d9cb0ed74c125a5783abde1d6cb897c2fce4a2026c7789331a10c07868b122d"} Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.552123 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Nov 23 00:36:51 crc kubenswrapper[4743]: I1123 00:36:51.737981 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Nov 23 00:36:51 crc kubenswrapper[4743]: W1123 00:36:51.744229 4743 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod793b9668_bdda_4921_8770_a0132de1b7b4.slice/crio-c1ab7d0b066f097c5ea20d5bee1aa4624c95e3300859d16014b62fdf701360fe WatchSource:0}: Error finding container c1ab7d0b066f097c5ea20d5bee1aa4624c95e3300859d16014b62fdf701360fe: Status 404 returned error can't find the container with id c1ab7d0b066f097c5ea20d5bee1aa4624c95e3300859d16014b62fdf701360fe Nov 23 00:36:52 crc kubenswrapper[4743]: I1123 00:36:52.498657 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"793b9668-bdda-4921-8770-a0132de1b7b4","Type":"ContainerStarted","Data":"c1ab7d0b066f097c5ea20d5bee1aa4624c95e3300859d16014b62fdf701360fe"} Nov 23 00:36:53 crc kubenswrapper[4743]: I1123 00:36:53.511295 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"793b9668-bdda-4921-8770-a0132de1b7b4","Type":"ContainerStarted","Data":"4a9e5651a0c4e317c1e551ff6ab379d363643b5566dbe832e578e2e009b2256c"} Nov 23 00:36:53 crc kubenswrapper[4743]: I1123 00:36:53.534978 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/curl" podStartSLOduration=1.27296477 podStartE2EDuration="2.534948903s" podCreationTimestamp="2025-11-23 00:36:51 +0000 UTC" firstStartedPulling="2025-11-23 00:36:51.746007091 +0000 UTC m=+1803.824105208" lastFinishedPulling="2025-11-23 00:36:53.007991214 +0000 UTC m=+1805.086089341" observedRunningTime="2025-11-23 00:36:53.526198679 +0000 UTC m=+1805.604296876" watchObservedRunningTime="2025-11-23 00:36:53.534948903 +0000 UTC m=+1805.613047070" Nov 23 00:36:54 crc kubenswrapper[4743]: I1123 00:36:54.523105 4743 generic.go:334] "Generic (PLEG): container finished" podID="793b9668-bdda-4921-8770-a0132de1b7b4" containerID="4a9e5651a0c4e317c1e551ff6ab379d363643b5566dbe832e578e2e009b2256c" exitCode=0 Nov 23 00:36:54 crc kubenswrapper[4743]: I1123 00:36:54.523188 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"793b9668-bdda-4921-8770-a0132de1b7b4","Type":"ContainerDied","Data":"4a9e5651a0c4e317c1e551ff6ab379d363643b5566dbe832e578e2e009b2256c"} Nov 23 00:36:54 crc kubenswrapper[4743]: I1123 00:36:54.722424 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:36:54 crc kubenswrapper[4743]: E1123 00:36:54.722671 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:36:56 crc kubenswrapper[4743]: I1123 00:36:56.997619 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Nov 23 00:36:57 crc kubenswrapper[4743]: I1123 00:36:57.190760 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmtdz\" (UniqueName: \"kubernetes.io/projected/793b9668-bdda-4921-8770-a0132de1b7b4-kube-api-access-cmtdz\") pod \"793b9668-bdda-4921-8770-a0132de1b7b4\" (UID: \"793b9668-bdda-4921-8770-a0132de1b7b4\") " Nov 23 00:36:57 crc kubenswrapper[4743]: I1123 00:36:57.210695 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793b9668-bdda-4921-8770-a0132de1b7b4-kube-api-access-cmtdz" (OuterVolumeSpecName: "kube-api-access-cmtdz") pod "793b9668-bdda-4921-8770-a0132de1b7b4" (UID: "793b9668-bdda-4921-8770-a0132de1b7b4"). InnerVolumeSpecName "kube-api-access-cmtdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:36:57 crc kubenswrapper[4743]: I1123 00:36:57.221504 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_793b9668-bdda-4921-8770-a0132de1b7b4/curl/0.log" Nov 23 00:36:57 crc kubenswrapper[4743]: I1123 00:36:57.292304 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmtdz\" (UniqueName: \"kubernetes.io/projected/793b9668-bdda-4921-8770-a0132de1b7b4-kube-api-access-cmtdz\") on node \"crc\" DevicePath \"\"" Nov 23 00:36:57 crc kubenswrapper[4743]: I1123 00:36:57.545821 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"793b9668-bdda-4921-8770-a0132de1b7b4","Type":"ContainerDied","Data":"c1ab7d0b066f097c5ea20d5bee1aa4624c95e3300859d16014b62fdf701360fe"} Nov 23 00:36:57 crc kubenswrapper[4743]: I1123 00:36:57.545860 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1ab7d0b066f097c5ea20d5bee1aa4624c95e3300859d16014b62fdf701360fe" Nov 23 00:36:57 crc kubenswrapper[4743]: I1123 00:36:57.545894 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Nov 23 00:36:57 crc kubenswrapper[4743]: I1123 00:36:57.569983 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-bdpk2_21f5036a-6e16-42f6-94c2-d253a043278e/prometheus-webhook-snmp/0.log" Nov 23 00:37:01 crc kubenswrapper[4743]: I1123 00:37:01.599640 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hzf8t" event={"ID":"9fd1a29d-4a6b-43f4-acad-db212a968135","Type":"ContainerStarted","Data":"0828971a64ba60348e278dd6a9981300d0bc3274e2f082b73d53a6630bc80620"} Nov 23 00:37:08 crc kubenswrapper[4743]: I1123 00:37:08.655678 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hzf8t" event={"ID":"9fd1a29d-4a6b-43f4-acad-db212a968135","Type":"ContainerStarted","Data":"013db7055f1dc0d5ddb234e07f1138781cae47ed35ff3d2eacf1e6d5cbfc1954"} Nov 23 00:37:08 crc kubenswrapper[4743]: I1123 00:37:08.687296 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-hzf8t" podStartSLOduration=2.402161081 podStartE2EDuration="18.68727399s" podCreationTimestamp="2025-11-23 00:36:50 +0000 UTC" firstStartedPulling="2025-11-23 00:36:51.40623673 +0000 UTC m=+1803.484334857" lastFinishedPulling="2025-11-23 00:37:07.691349639 +0000 UTC m=+1819.769447766" observedRunningTime="2025-11-23 00:37:08.678850994 +0000 UTC m=+1820.756949161" watchObservedRunningTime="2025-11-23 00:37:08.68727399 +0000 UTC m=+1820.765372137" Nov 23 00:37:09 crc kubenswrapper[4743]: I1123 00:37:09.722350 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:37:09 crc kubenswrapper[4743]: E1123 00:37:09.722593 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:37:23 crc kubenswrapper[4743]: I1123 00:37:23.721696 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:37:23 crc kubenswrapper[4743]: E1123 00:37:23.722993 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:37:27 crc kubenswrapper[4743]: I1123 00:37:27.769877 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-bdpk2_21f5036a-6e16-42f6-94c2-d253a043278e/prometheus-webhook-snmp/0.log" Nov 23 00:37:35 crc kubenswrapper[4743]: I1123 00:37:35.859528 4743 generic.go:334] "Generic (PLEG): container finished" podID="9fd1a29d-4a6b-43f4-acad-db212a968135" containerID="0828971a64ba60348e278dd6a9981300d0bc3274e2f082b73d53a6630bc80620" exitCode=1 Nov 23 00:37:35 crc kubenswrapper[4743]: I1123 00:37:35.859827 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/stf-smoketest-smoke1-hzf8t" event={"ID":"9fd1a29d-4a6b-43f4-acad-db212a968135","Type":"ContainerDied","Data":"0828971a64ba60348e278dd6a9981300d0bc3274e2f082b73d53a6630bc80620"} Nov 23 00:37:35 crc kubenswrapper[4743]: I1123 00:37:35.860931 4743 scope.go:117] "RemoveContainer" containerID="0828971a64ba60348e278dd6a9981300d0bc3274e2f082b73d53a6630bc80620" Nov 23 00:37:36 crc kubenswrapper[4743]: I1123 00:37:36.722307 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:37:36 crc kubenswrapper[4743]: E1123 00:37:36.722808 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:37:39 crc kubenswrapper[4743]: I1123 00:37:39.893933 4743 generic.go:334] "Generic (PLEG): container finished" podID="9fd1a29d-4a6b-43f4-acad-db212a968135" containerID="013db7055f1dc0d5ddb234e07f1138781cae47ed35ff3d2eacf1e6d5cbfc1954" exitCode=0 Nov 23 00:37:39 crc kubenswrapper[4743]: I1123 00:37:39.894019 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hzf8t" event={"ID":"9fd1a29d-4a6b-43f4-acad-db212a968135","Type":"ContainerDied","Data":"013db7055f1dc0d5ddb234e07f1138781cae47ed35ff3d2eacf1e6d5cbfc1954"} Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.253865 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.361690 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-config\") pod \"9fd1a29d-4a6b-43f4-acad-db212a968135\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.361809 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-entrypoint-script\") pod \"9fd1a29d-4a6b-43f4-acad-db212a968135\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.361843 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-sensubility-config\") pod \"9fd1a29d-4a6b-43f4-acad-db212a968135\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.361861 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-entrypoint-script\") pod \"9fd1a29d-4a6b-43f4-acad-db212a968135\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.361930 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-publisher\") pod \"9fd1a29d-4a6b-43f4-acad-db212a968135\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.361945 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-healthcheck-log\") pod \"9fd1a29d-4a6b-43f4-acad-db212a968135\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.362015 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj5xb\" (UniqueName: \"kubernetes.io/projected/9fd1a29d-4a6b-43f4-acad-db212a968135-kube-api-access-wj5xb\") pod \"9fd1a29d-4a6b-43f4-acad-db212a968135\" (UID: \"9fd1a29d-4a6b-43f4-acad-db212a968135\") " Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.378739 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "9fd1a29d-4a6b-43f4-acad-db212a968135" (UID: "9fd1a29d-4a6b-43f4-acad-db212a968135"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.380087 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd1a29d-4a6b-43f4-acad-db212a968135-kube-api-access-wj5xb" (OuterVolumeSpecName: "kube-api-access-wj5xb") pod "9fd1a29d-4a6b-43f4-acad-db212a968135" (UID: "9fd1a29d-4a6b-43f4-acad-db212a968135"). InnerVolumeSpecName "kube-api-access-wj5xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.380297 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "9fd1a29d-4a6b-43f4-acad-db212a968135" (UID: "9fd1a29d-4a6b-43f4-acad-db212a968135"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.381728 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "9fd1a29d-4a6b-43f4-acad-db212a968135" (UID: "9fd1a29d-4a6b-43f4-acad-db212a968135"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.382668 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "9fd1a29d-4a6b-43f4-acad-db212a968135" (UID: "9fd1a29d-4a6b-43f4-acad-db212a968135"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.382702 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "9fd1a29d-4a6b-43f4-acad-db212a968135" (UID: "9fd1a29d-4a6b-43f4-acad-db212a968135"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.385730 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "9fd1a29d-4a6b-43f4-acad-db212a968135" (UID: "9fd1a29d-4a6b-43f4-acad-db212a968135"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.463771 4743 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.463816 4743 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-healthcheck-log\") on node \"crc\" DevicePath \"\"" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.463831 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj5xb\" (UniqueName: \"kubernetes.io/projected/9fd1a29d-4a6b-43f4-acad-db212a968135-kube-api-access-wj5xb\") on node \"crc\" DevicePath \"\"" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.463845 4743 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.463860 4743 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.463874 4743 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-sensubility-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.463890 4743 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9fd1a29d-4a6b-43f4-acad-db212a968135-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.911921 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hzf8t" event={"ID":"9fd1a29d-4a6b-43f4-acad-db212a968135","Type":"ContainerDied","Data":"4d9cb0ed74c125a5783abde1d6cb897c2fce4a2026c7789331a10c07868b122d"} Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 00:37:41.911971 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d9cb0ed74c125a5783abde1d6cb897c2fce4a2026c7789331a10c07868b122d" Nov 23 00:37:41 crc kubenswrapper[4743]: I1123 
00:37:41.911973 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hzf8t" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.036152 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-qp6sf"] Nov 23 00:37:49 crc kubenswrapper[4743]: E1123 00:37:49.037277 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793b9668-bdda-4921-8770-a0132de1b7b4" containerName="curl" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.037298 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="793b9668-bdda-4921-8770-a0132de1b7b4" containerName="curl" Nov 23 00:37:49 crc kubenswrapper[4743]: E1123 00:37:49.037316 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd1a29d-4a6b-43f4-acad-db212a968135" containerName="smoketest-ceilometer" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.037329 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd1a29d-4a6b-43f4-acad-db212a968135" containerName="smoketest-ceilometer" Nov 23 00:37:49 crc kubenswrapper[4743]: E1123 00:37:49.037361 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd1a29d-4a6b-43f4-acad-db212a968135" containerName="smoketest-collectd" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.037373 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd1a29d-4a6b-43f4-acad-db212a968135" containerName="smoketest-collectd" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.037632 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd1a29d-4a6b-43f4-acad-db212a968135" containerName="smoketest-collectd" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.037662 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="793b9668-bdda-4921-8770-a0132de1b7b4" containerName="curl" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.037678 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd1a29d-4a6b-43f4-acad-db212a968135" containerName="smoketest-ceilometer" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.039095 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.040384 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-qp6sf"] Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.041215 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.042016 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.044087 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.044113 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.044658 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.044827 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.177753 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.178133 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-healthcheck-log\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.178310 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-config\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.178478 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-sensubility-config\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.178730 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-publisher\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 
00:37:49.178895 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.179068 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv6jw\" (UniqueName: \"kubernetes.io/projected/522a6513-c4a7-4224-bcfa-bbd22f789440-kube-api-access-zv6jw\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.280427 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.280573 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv6jw\" (UniqueName: \"kubernetes.io/projected/522a6513-c4a7-4224-bcfa-bbd22f789440-kube-api-access-zv6jw\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.280662 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.280735 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-healthcheck-log\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.280780 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-config\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.280830 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-sensubility-config\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.280897 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-publisher\") pod \"stf-smoketest-smoke1-qp6sf\" 
(UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.282327 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.283185 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-healthcheck-log\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.283257 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-config\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.283351 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.283702 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-sensubility-config\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.284186 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-publisher\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.312584 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv6jw\" (UniqueName: \"kubernetes.io/projected/522a6513-c4a7-4224-bcfa-bbd22f789440-kube-api-access-zv6jw\") pod \"stf-smoketest-smoke1-qp6sf\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.367887 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.854699 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-qp6sf"] Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.867550 4743 scope.go:117] "RemoveContainer" containerID="46e160ae9610b02a638920b9e52ac837ba8f39b5d365ca1343b2ba7a05de38fb" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.920063 4743 scope.go:117] "RemoveContainer" containerID="eb91331239e0fa2f884d83de42a4ce883f2f148c4d742d447faad28d05dc54b9" Nov 23 00:37:49 crc kubenswrapper[4743]: I1123 00:37:49.981533 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qp6sf" event={"ID":"522a6513-c4a7-4224-bcfa-bbd22f789440","Type":"ContainerStarted","Data":"986b42df96bf1e1ac170876b0634cf09dd4be34ce8d2c44e1bf238eef6c96194"} Nov 23 00:37:50 crc kubenswrapper[4743]: I1123 00:37:50.722518 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:37:50 crc kubenswrapper[4743]: E1123 00:37:50.723093 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:37:50 crc kubenswrapper[4743]: I1123 00:37:50.994219 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qp6sf" event={"ID":"522a6513-c4a7-4224-bcfa-bbd22f789440","Type":"ContainerStarted","Data":"dfceee31a934f44dfe5d59d07116853c23c1e27a66327dbdf85cb9aa5e954a37"} Nov 23 00:37:50 crc kubenswrapper[4743]: I1123 00:37:50.994291 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qp6sf" event={"ID":"522a6513-c4a7-4224-bcfa-bbd22f789440","Type":"ContainerStarted","Data":"5063fa274416a094e37d6ce63b17490dff331d7498bf5001deaefa3c9832d67e"} Nov 23 00:37:51 crc kubenswrapper[4743]: I1123 00:37:51.022053 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-qp6sf" podStartSLOduration=2.022028017 podStartE2EDuration="2.022028017s" podCreationTimestamp="2025-11-23 00:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 00:37:51.021731369 +0000 UTC m=+1863.099829546" watchObservedRunningTime="2025-11-23 00:37:51.022028017 +0000 UTC m=+1863.100126184" Nov 23 00:38:03 crc kubenswrapper[4743]: I1123 00:38:03.722196 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:38:03 crc kubenswrapper[4743]: E1123 00:38:03.722949 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:38:15 crc kubenswrapper[4743]: I1123 00:38:15.722829 4743 
scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:38:15 crc kubenswrapper[4743]: E1123 00:38:15.723616 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:38:23 crc kubenswrapper[4743]: I1123 00:38:23.270958 4743 generic.go:334] "Generic (PLEG): container finished" podID="522a6513-c4a7-4224-bcfa-bbd22f789440" containerID="dfceee31a934f44dfe5d59d07116853c23c1e27a66327dbdf85cb9aa5e954a37" exitCode=0 Nov 23 00:38:23 crc kubenswrapper[4743]: I1123 00:38:23.271116 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qp6sf" event={"ID":"522a6513-c4a7-4224-bcfa-bbd22f789440","Type":"ContainerDied","Data":"dfceee31a934f44dfe5d59d07116853c23c1e27a66327dbdf85cb9aa5e954a37"} Nov 23 00:38:23 crc kubenswrapper[4743]: I1123 00:38:23.272361 4743 scope.go:117] "RemoveContainer" containerID="dfceee31a934f44dfe5d59d07116853c23c1e27a66327dbdf85cb9aa5e954a37" Nov 23 00:38:24 crc kubenswrapper[4743]: I1123 00:38:24.282385 4743 generic.go:334] "Generic (PLEG): container finished" podID="522a6513-c4a7-4224-bcfa-bbd22f789440" containerID="5063fa274416a094e37d6ce63b17490dff331d7498bf5001deaefa3c9832d67e" exitCode=0 Nov 23 00:38:24 crc kubenswrapper[4743]: I1123 00:38:24.282448 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qp6sf" event={"ID":"522a6513-c4a7-4224-bcfa-bbd22f789440","Type":"ContainerDied","Data":"5063fa274416a094e37d6ce63b17490dff331d7498bf5001deaefa3c9832d67e"} Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.602584 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.670272 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv6jw\" (UniqueName: \"kubernetes.io/projected/522a6513-c4a7-4224-bcfa-bbd22f789440-kube-api-access-zv6jw\") pod \"522a6513-c4a7-4224-bcfa-bbd22f789440\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.670609 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-sensubility-config\") pod \"522a6513-c4a7-4224-bcfa-bbd22f789440\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.670667 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-config\") pod \"522a6513-c4a7-4224-bcfa-bbd22f789440\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.670699 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-healthcheck-log\") pod \"522a6513-c4a7-4224-bcfa-bbd22f789440\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.670734 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-entrypoint-script\") pod \"522a6513-c4a7-4224-bcfa-bbd22f789440\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.670749 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-publisher\") pod \"522a6513-c4a7-4224-bcfa-bbd22f789440\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.670840 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-entrypoint-script\") pod \"522a6513-c4a7-4224-bcfa-bbd22f789440\" (UID: \"522a6513-c4a7-4224-bcfa-bbd22f789440\") " Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.675702 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/522a6513-c4a7-4224-bcfa-bbd22f789440-kube-api-access-zv6jw" (OuterVolumeSpecName: "kube-api-access-zv6jw") pod "522a6513-c4a7-4224-bcfa-bbd22f789440" (UID: "522a6513-c4a7-4224-bcfa-bbd22f789440"). InnerVolumeSpecName "kube-api-access-zv6jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.689878 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "522a6513-c4a7-4224-bcfa-bbd22f789440" (UID: "522a6513-c4a7-4224-bcfa-bbd22f789440"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.691366 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "522a6513-c4a7-4224-bcfa-bbd22f789440" (UID: "522a6513-c4a7-4224-bcfa-bbd22f789440"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.692123 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "522a6513-c4a7-4224-bcfa-bbd22f789440" (UID: "522a6513-c4a7-4224-bcfa-bbd22f789440"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.692260 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "522a6513-c4a7-4224-bcfa-bbd22f789440" (UID: "522a6513-c4a7-4224-bcfa-bbd22f789440"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.692952 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "522a6513-c4a7-4224-bcfa-bbd22f789440" (UID: "522a6513-c4a7-4224-bcfa-bbd22f789440"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.696322 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "522a6513-c4a7-4224-bcfa-bbd22f789440" (UID: "522a6513-c4a7-4224-bcfa-bbd22f789440"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.773014 4743 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.773048 4743 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-healthcheck-log\") on node \"crc\" DevicePath \"\"" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.773058 4743 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.773069 4743 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.773079 4743 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.773088 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv6jw\" (UniqueName: \"kubernetes.io/projected/522a6513-c4a7-4224-bcfa-bbd22f789440-kube-api-access-zv6jw\") on node \"crc\" DevicePath \"\"" Nov 23 00:38:25 crc kubenswrapper[4743]: I1123 00:38:25.773097 4743 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/522a6513-c4a7-4224-bcfa-bbd22f789440-sensubility-config\") on node \"crc\" DevicePath \"\"" Nov 23 00:38:26 crc kubenswrapper[4743]: I1123 00:38:26.299918 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qp6sf" event={"ID":"522a6513-c4a7-4224-bcfa-bbd22f789440","Type":"ContainerDied","Data":"986b42df96bf1e1ac170876b0634cf09dd4be34ce8d2c44e1bf238eef6c96194"} Nov 23 00:38:26 crc kubenswrapper[4743]: I1123 00:38:26.299957 4743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="986b42df96bf1e1ac170876b0634cf09dd4be34ce8d2c44e1bf238eef6c96194" Nov 23 00:38:26 crc kubenswrapper[4743]: I1123 00:38:26.300498 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qp6sf" Nov 23 00:38:27 crc kubenswrapper[4743]: I1123 00:38:27.812743 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-hzf8t_9fd1a29d-4a6b-43f4-acad-db212a968135/smoketest-collectd/0.log" Nov 23 00:38:28 crc kubenswrapper[4743]: I1123 00:38:28.118289 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-hzf8t_9fd1a29d-4a6b-43f4-acad-db212a968135/smoketest-ceilometer/0.log" Nov 23 00:38:28 crc kubenswrapper[4743]: I1123 00:38:28.385866 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-t4wtq_b4f28bbd-2af7-4491-aae2-8fe3dddec07b/default-interconnect/0.log" Nov 23 00:38:28 crc kubenswrapper[4743]: I1123 00:38:28.696437 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn_ca66272f-dcb2-4bc1-88d8-ed89f3493798/bridge/2.log" Nov 23 00:38:29 crc kubenswrapper[4743]: I1123 00:38:29.083666 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-bqlzn_ca66272f-dcb2-4bc1-88d8-ed89f3493798/sg-core/0.log" Nov 23 00:38:29 crc kubenswrapper[4743]: I1123 00:38:29.457464 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5f89974568-q42fx_ffc96728-970f-4f41-afe2-14bb99a1c727/bridge/2.log" Nov 23 00:38:29 crc kubenswrapper[4743]: I1123 00:38:29.748792 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5f89974568-q42fx_ffc96728-970f-4f41-afe2-14bb99a1c727/sg-core/0.log" Nov 23 00:38:30 crc kubenswrapper[4743]: I1123 00:38:30.090213 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm_e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a/bridge/2.log" Nov 23 00:38:30 crc kubenswrapper[4743]: I1123 00:38:30.446138 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-jrwjm_e2ea53ae-9c57-4fd9-95f7-f52a8f11fc2a/sg-core/0.log" Nov 23 00:38:30 crc kubenswrapper[4743]: I1123 00:38:30.724532 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:38:30 crc kubenswrapper[4743]: E1123 00:38:30.724814 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:38:30 crc kubenswrapper[4743]: I1123 00:38:30.836097 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s_80cd6983-1a53-4718-a4e1-5d0d0d711e49/bridge/2.log" Nov 23 00:38:31 crc kubenswrapper[4743]: I1123 00:38:31.162309 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7c6f5fff5-4lz5s_80cd6983-1a53-4718-a4e1-5d0d0d711e49/sg-core/0.log" Nov 23 00:38:31 crc kubenswrapper[4743]: I1123 00:38:31.479992 4743 
log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn_8d12fd46-053c-430f-9586-8fd69219c820/bridge/2.log" Nov 23 00:38:31 crc kubenswrapper[4743]: I1123 00:38:31.765265 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-nv8cn_8d12fd46-053c-430f-9586-8fd69219c820/sg-core/0.log" Nov 23 00:38:35 crc kubenswrapper[4743]: I1123 00:38:35.009663 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-866b58b8fd-m7dsv_adecaf84-7fbf-45a3-91ee-aee4f96249df/operator/0.log" Nov 23 00:38:35 crc kubenswrapper[4743]: I1123 00:38:35.375016 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_3f240806-9a77-4dd9-9962-5778151c1902/prometheus/0.log" Nov 23 00:38:35 crc kubenswrapper[4743]: I1123 00:38:35.751016 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_a08f9692-946d-486d-97ef-080401aabf58/elasticsearch/0.log" Nov 23 00:38:36 crc kubenswrapper[4743]: I1123 00:38:36.187908 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-bdpk2_21f5036a-6e16-42f6-94c2-d253a043278e/prometheus-webhook-snmp/0.log" Nov 23 00:38:36 crc kubenswrapper[4743]: I1123 00:38:36.579666 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_17173603-6315-4375-8b4f-75b534bb9af2/alertmanager/0.log" Nov 23 00:38:41 crc kubenswrapper[4743]: I1123 00:38:41.723325 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:38:41 crc kubenswrapper[4743]: E1123 00:38:41.724546 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 00:38:49 crc kubenswrapper[4743]: I1123 00:38:49.981049 4743 scope.go:117] "RemoveContainer" containerID="08eb692bd0713878d017855b58e0d0ffe5df9cdcb6027c919f630352e5a1347f" Nov 23 00:38:50 crc kubenswrapper[4743]: I1123 00:38:50.011705 4743 scope.go:117] "RemoveContainer" containerID="4720ea2a5a1ce0a3c9ab78fbaa6b3d28ba60df6e90561808edba19cf8c272758" Nov 23 00:38:53 crc kubenswrapper[4743]: I1123 00:38:53.247626 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-594cccf5b4-hv9hp_e9bf573d-e75b-4d63-adc0-c4812688b371/operator/0.log" Nov 23 00:38:53 crc kubenswrapper[4743]: I1123 00:38:53.721970 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:38:53 crc kubenswrapper[4743]: E1123 00:38:53.722266 4743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cxtxv_openshift-machine-config-operator(dbda6ee4-c567-4104-9c7a-ca01c6f9d989)\"" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" Nov 23 
00:38:56 crc kubenswrapper[4743]: I1123 00:38:56.320391 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-866b58b8fd-m7dsv_adecaf84-7fbf-45a3-91ee-aee4f96249df/operator/0.log"
Nov 23 00:38:56 crc kubenswrapper[4743]: I1123 00:38:56.667894 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_be033c0c-fc51-4ec7-818f-d50c65fbda70/qdr/0.log"
Nov 23 00:39:04 crc kubenswrapper[4743]: I1123 00:39:04.722464 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15"
Nov 23 00:39:05 crc kubenswrapper[4743]: I1123 00:39:05.631401 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"b38f2be124c245f85f6bf4d98448b81d3ad2669826bffd6a59119fd3b3f8421a"}
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.544156 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lmwwn/must-gather-qg2kk"]
Nov 23 00:39:33 crc kubenswrapper[4743]: E1123 00:39:33.544767 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522a6513-c4a7-4224-bcfa-bbd22f789440" containerName="smoketest-collectd"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.544777 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="522a6513-c4a7-4224-bcfa-bbd22f789440" containerName="smoketest-collectd"
Nov 23 00:39:33 crc kubenswrapper[4743]: E1123 00:39:33.544790 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522a6513-c4a7-4224-bcfa-bbd22f789440" containerName="smoketest-ceilometer"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.544797 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="522a6513-c4a7-4224-bcfa-bbd22f789440" containerName="smoketest-ceilometer"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.544917 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="522a6513-c4a7-4224-bcfa-bbd22f789440" containerName="smoketest-ceilometer"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.544927 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="522a6513-c4a7-4224-bcfa-bbd22f789440" containerName="smoketest-collectd"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.545585 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmwwn/must-gather-qg2kk"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.547968 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lmwwn"/"kube-root-ca.crt"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.552071 4743 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lmwwn"/"default-dockercfg-ddfn5"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.552333 4743 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lmwwn"/"openshift-service-ca.crt"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.558667 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lmwwn/must-gather-qg2kk"]
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.718992 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hggrq\" (UniqueName: \"kubernetes.io/projected/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-kube-api-access-hggrq\") pod \"must-gather-qg2kk\" (UID: \"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c\") " pod="openshift-must-gather-lmwwn/must-gather-qg2kk"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.719075 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-must-gather-output\") pod \"must-gather-qg2kk\" (UID: \"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c\") " pod="openshift-must-gather-lmwwn/must-gather-qg2kk"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.820017 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hggrq\" (UniqueName: \"kubernetes.io/projected/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-kube-api-access-hggrq\") pod \"must-gather-qg2kk\" (UID: \"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c\") " pod="openshift-must-gather-lmwwn/must-gather-qg2kk"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.820101 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-must-gather-output\") pod \"must-gather-qg2kk\" (UID: \"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c\") " pod="openshift-must-gather-lmwwn/must-gather-qg2kk"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.821568 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-must-gather-output\") pod \"must-gather-qg2kk\" (UID: \"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c\") " pod="openshift-must-gather-lmwwn/must-gather-qg2kk"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.843257 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hggrq\" (UniqueName: \"kubernetes.io/projected/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-kube-api-access-hggrq\") pod \"must-gather-qg2kk\" (UID: \"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c\") " pod="openshift-must-gather-lmwwn/must-gather-qg2kk"
Nov 23 00:39:33 crc kubenswrapper[4743]: I1123 00:39:33.862519 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmwwn/must-gather-qg2kk"
Nov 23 00:39:34 crc kubenswrapper[4743]: I1123 00:39:34.175974 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lmwwn/must-gather-qg2kk"]
Nov 23 00:39:34 crc kubenswrapper[4743]: I1123 00:39:34.870055 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmwwn/must-gather-qg2kk" event={"ID":"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c","Type":"ContainerStarted","Data":"92d96abdbdef7580be0276e504a9ae4fa0e2b3bfbcdd96790ee92cd1c062a843"}
Nov 23 00:39:42 crc kubenswrapper[4743]: I1123 00:39:42.950426 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmwwn/must-gather-qg2kk" event={"ID":"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c","Type":"ContainerStarted","Data":"b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0"}
Nov 23 00:39:43 crc kubenswrapper[4743]: I1123 00:39:43.958752 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmwwn/must-gather-qg2kk" event={"ID":"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c","Type":"ContainerStarted","Data":"3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e"}
Nov 23 00:39:43 crc kubenswrapper[4743]: I1123 00:39:43.975573 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lmwwn/must-gather-qg2kk" podStartSLOduration=2.552181676 podStartE2EDuration="10.975554009s" podCreationTimestamp="2025-11-23 00:39:33 +0000 UTC" firstStartedPulling="2025-11-23 00:39:34.187918998 +0000 UTC m=+1966.266017125" lastFinishedPulling="2025-11-23 00:39:42.611291331 +0000 UTC m=+1974.689389458" observedRunningTime="2025-11-23 00:39:43.97108475 +0000 UTC m=+1976.049182887" watchObservedRunningTime="2025-11-23 00:39:43.975554009 +0000 UTC m=+1976.053652136"
Nov 23 00:40:21 crc kubenswrapper[4743]: I1123 00:40:21.818301 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hw6zj_ac9135b1-ff1e-460b-8c71-84b1f15317fa/control-plane-machine-set-operator/0.log"
Nov 23 00:40:21 crc kubenswrapper[4743]: I1123 00:40:21.933743 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-njxkk_6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022/kube-rbac-proxy/0.log"
Nov 23 00:40:22 crc kubenswrapper[4743]: I1123 00:40:22.008144 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-njxkk_6d2badcd-aaf0-43e6-ae8e-7ce25bc7b022/machine-api-operator/0.log"
Nov 23 00:40:33 crc kubenswrapper[4743]: I1123 00:40:33.704875 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-8c446_70528ba7-8ac2-4d82-b61e-22d639fe36ab/cert-manager-controller/0.log"
Nov 23 00:40:33 crc kubenswrapper[4743]: I1123 00:40:33.908263 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-g8t2d_2f55cc68-e977-46ce-8299-a57d98984025/cert-manager-cainjector/0.log"
Nov 23 00:40:33 crc kubenswrapper[4743]: I1123 00:40:33.963199 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-7rvjt_4c6aa4f6-06ad-43da-afcb-a4f489468654/cert-manager-webhook/0.log"
Nov 23 00:40:48 crc kubenswrapper[4743]: I1123 00:40:48.650559 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh_7df6008c-cc2b-4422-a7d4-c02b91c052a6/util/0.log"
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh_7df6008c-cc2b-4422-a7d4-c02b91c052a6/util/0.log" Nov 23 00:40:48 crc kubenswrapper[4743]: I1123 00:40:48.809315 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh_7df6008c-cc2b-4422-a7d4-c02b91c052a6/util/0.log" Nov 23 00:40:48 crc kubenswrapper[4743]: I1123 00:40:48.833733 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh_7df6008c-cc2b-4422-a7d4-c02b91c052a6/pull/0.log" Nov 23 00:40:48 crc kubenswrapper[4743]: I1123 00:40:48.856686 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh_7df6008c-cc2b-4422-a7d4-c02b91c052a6/pull/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.007693 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh_7df6008c-cc2b-4422-a7d4-c02b91c052a6/util/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.070127 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh_7df6008c-cc2b-4422-a7d4-c02b91c052a6/pull/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.087424 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apnrmh_7df6008c-cc2b-4422-a7d4-c02b91c052a6/extract/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.195378 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f_a46c6df3-6857-4e57-bfe3-e6dd3c3a3484/util/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.360461 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f_a46c6df3-6857-4e57-bfe3-e6dd3c3a3484/pull/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.381235 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f_a46c6df3-6857-4e57-bfe3-e6dd3c3a3484/pull/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.394809 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f_a46c6df3-6857-4e57-bfe3-e6dd3c3a3484/util/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.538469 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f_a46c6df3-6857-4e57-bfe3-e6dd3c3a3484/util/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.568826 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f_a46c6df3-6857-4e57-bfe3-e6dd3c3a3484/extract/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.574248 4743 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105r78f_a46c6df3-6857-4e57-bfe3-e6dd3c3a3484/pull/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.733924 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns_cfd3bf60-7bfe-4a47-940d-a1e8b864d77d/util/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.899102 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns_cfd3bf60-7bfe-4a47-940d-a1e8b864d77d/util/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.902966 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns_cfd3bf60-7bfe-4a47-940d-a1e8b864d77d/pull/0.log" Nov 23 00:40:49 crc kubenswrapper[4743]: I1123 00:40:49.906585 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns_cfd3bf60-7bfe-4a47-940d-a1e8b864d77d/pull/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.057686 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns_cfd3bf60-7bfe-4a47-940d-a1e8b864d77d/extract/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.059269 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns_cfd3bf60-7bfe-4a47-940d-a1e8b864d77d/util/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.063210 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fprxns_cfd3bf60-7bfe-4a47-940d-a1e8b864d77d/pull/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.232910 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr_2f527e47-438e-4dbe-81e3-2528e5a46677/util/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.400574 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr_2f527e47-438e-4dbe-81e3-2528e5a46677/util/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.460771 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr_2f527e47-438e-4dbe-81e3-2528e5a46677/pull/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.509863 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr_2f527e47-438e-4dbe-81e3-2528e5a46677/pull/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.580855 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr_2f527e47-438e-4dbe-81e3-2528e5a46677/util/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.622845 4743 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr_2f527e47-438e-4dbe-81e3-2528e5a46677/pull/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.648663 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egs9dr_2f527e47-438e-4dbe-81e3-2528e5a46677/extract/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.799015 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g26ff_25831da8-a752-4cf2-9154-8cc119484cdf/extract-utilities/0.log" Nov 23 00:40:50 crc kubenswrapper[4743]: I1123 00:40:50.989016 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g26ff_25831da8-a752-4cf2-9154-8cc119484cdf/extract-utilities/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.023577 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g26ff_25831da8-a752-4cf2-9154-8cc119484cdf/extract-content/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.028077 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g26ff_25831da8-a752-4cf2-9154-8cc119484cdf/extract-content/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.150171 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g26ff_25831da8-a752-4cf2-9154-8cc119484cdf/extract-content/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.157129 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g26ff_25831da8-a752-4cf2-9154-8cc119484cdf/extract-utilities/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.391857 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kvts5_100bf8b6-d36a-46bc-ba47-ea537ea03f87/extract-utilities/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.528932 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kvts5_100bf8b6-d36a-46bc-ba47-ea537ea03f87/extract-utilities/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.592375 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g26ff_25831da8-a752-4cf2-9154-8cc119484cdf/registry-server/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.601686 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kvts5_100bf8b6-d36a-46bc-ba47-ea537ea03f87/extract-content/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.624791 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kvts5_100bf8b6-d36a-46bc-ba47-ea537ea03f87/extract-content/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.763519 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kvts5_100bf8b6-d36a-46bc-ba47-ea537ea03f87/extract-utilities/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.852282 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-b7nhm_8f9276af-fa7c-4f89-a06e-0e89e2bcc76d/marketplace-operator/0.log" Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 
Nov 23 00:40:51 crc kubenswrapper[4743]: I1123 00:40:51.906468 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kvts5_100bf8b6-d36a-46bc-ba47-ea537ea03f87/extract-content/0.log"
Nov 23 00:40:52 crc kubenswrapper[4743]: I1123 00:40:52.097424 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c555x_fb0dbcfe-3399-4146-b367-97582fb884cc/extract-utilities/0.log"
Nov 23 00:40:52 crc kubenswrapper[4743]: I1123 00:40:52.115630 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kvts5_100bf8b6-d36a-46bc-ba47-ea537ea03f87/registry-server/0.log"
Nov 23 00:40:52 crc kubenswrapper[4743]: I1123 00:40:52.245644 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c555x_fb0dbcfe-3399-4146-b367-97582fb884cc/extract-utilities/0.log"
Nov 23 00:40:52 crc kubenswrapper[4743]: I1123 00:40:52.284940 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c555x_fb0dbcfe-3399-4146-b367-97582fb884cc/extract-content/0.log"
Nov 23 00:40:52 crc kubenswrapper[4743]: I1123 00:40:52.297996 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c555x_fb0dbcfe-3399-4146-b367-97582fb884cc/extract-content/0.log"
Nov 23 00:40:52 crc kubenswrapper[4743]: I1123 00:40:52.433087 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c555x_fb0dbcfe-3399-4146-b367-97582fb884cc/extract-utilities/0.log"
Nov 23 00:40:52 crc kubenswrapper[4743]: I1123 00:40:52.459525 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c555x_fb0dbcfe-3399-4146-b367-97582fb884cc/extract-content/0.log"
Nov 23 00:40:52 crc kubenswrapper[4743]: I1123 00:40:52.787089 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c555x_fb0dbcfe-3399-4146-b367-97582fb884cc/registry-server/0.log"
Nov 23 00:41:03 crc kubenswrapper[4743]: I1123 00:41:03.840851 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-dkd8f_ba332c3a-1550-4dba-856c-13ec50c7f04a/prometheus-operator/0.log"
Nov 23 00:41:04 crc kubenswrapper[4743]: I1123 00:41:04.017691 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5b8d57bdc6-ggczb_a8b59a3a-47f6-4efb-8851-d64094821b88/prometheus-operator-admission-webhook/0.log"
Nov 23 00:41:04 crc kubenswrapper[4743]: I1123 00:41:04.036594 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5b8d57bdc6-7wjbq_df083e33-3f8a-4094-8e61-3ce2fd8cea48/prometheus-operator-admission-webhook/0.log"
Nov 23 00:41:04 crc kubenswrapper[4743]: I1123 00:41:04.183472 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-x8gdr_69e0aa0b-787b-4283-9e2c-3bdad984d8c0/operator/0.log"
Nov 23 00:41:04 crc kubenswrapper[4743]: I1123 00:41:04.190399 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-28x4p_b01c1d4f-e027-408d-9d5f-c7c7006aa50f/perses-operator/0.log"
Nov 23 00:41:06 crc kubenswrapper[4743]: I1123 00:41:06.981835 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6zhb2"]
Nov 23 00:41:06 crc kubenswrapper[4743]: I1123 00:41:06.983711 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:06 crc kubenswrapper[4743]: I1123 00:41:06.995853 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6zhb2"]
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.071094 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b57dg\" (UniqueName: \"kubernetes.io/projected/42cdbf14-ea0a-42a7-8416-f57d8739cebd-kube-api-access-b57dg\") pod \"redhat-operators-6zhb2\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.071188 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-catalog-content\") pod \"redhat-operators-6zhb2\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.071233 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-utilities\") pod \"redhat-operators-6zhb2\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.172856 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-utilities\") pod \"redhat-operators-6zhb2\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.172953 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b57dg\" (UniqueName: \"kubernetes.io/projected/42cdbf14-ea0a-42a7-8416-f57d8739cebd-kube-api-access-b57dg\") pod \"redhat-operators-6zhb2\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.173002 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-catalog-content\") pod \"redhat-operators-6zhb2\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.173374 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-utilities\") pod \"redhat-operators-6zhb2\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.173429 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-catalog-content\") pod \"redhat-operators-6zhb2\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.206671 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b57dg\" (UniqueName: \"kubernetes.io/projected/42cdbf14-ea0a-42a7-8416-f57d8739cebd-kube-api-access-b57dg\") pod \"redhat-operators-6zhb2\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.307528 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.517314 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6zhb2"]
Nov 23 00:41:07 crc kubenswrapper[4743]: I1123 00:41:07.613971 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zhb2" event={"ID":"42cdbf14-ea0a-42a7-8416-f57d8739cebd","Type":"ContainerStarted","Data":"820a5edb857446969052493bc984dc9634b0db27878107050670db160c9e2807"}
Nov 23 00:41:08 crc kubenswrapper[4743]: I1123 00:41:08.620776 4743 generic.go:334] "Generic (PLEG): container finished" podID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerID="72b3bc33bfbe814cd9a73a6914957e5fea34021e3668979c1874313883493ed7" exitCode=0
Nov 23 00:41:08 crc kubenswrapper[4743]: I1123 00:41:08.620864 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zhb2" event={"ID":"42cdbf14-ea0a-42a7-8416-f57d8739cebd","Type":"ContainerDied","Data":"72b3bc33bfbe814cd9a73a6914957e5fea34021e3668979c1874313883493ed7"}
Nov 23 00:41:08 crc kubenswrapper[4743]: I1123 00:41:08.623207 4743 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 00:41:09 crc kubenswrapper[4743]: I1123 00:41:09.630359 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zhb2" event={"ID":"42cdbf14-ea0a-42a7-8416-f57d8739cebd","Type":"ContainerStarted","Data":"b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab"}
Nov 23 00:41:10 crc kubenswrapper[4743]: I1123 00:41:10.641499 4743 generic.go:334] "Generic (PLEG): container finished" podID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerID="b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab" exitCode=0
Nov 23 00:41:10 crc kubenswrapper[4743]: I1123 00:41:10.641594 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zhb2" event={"ID":"42cdbf14-ea0a-42a7-8416-f57d8739cebd","Type":"ContainerDied","Data":"b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab"}
Nov 23 00:41:11 crc kubenswrapper[4743]: I1123 00:41:11.651176 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zhb2" event={"ID":"42cdbf14-ea0a-42a7-8416-f57d8739cebd","Type":"ContainerStarted","Data":"33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719"}
Nov 23 00:41:17 crc kubenswrapper[4743]: I1123 00:41:17.308413 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:17 crc kubenswrapper[4743]: I1123 00:41:17.308957 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6zhb2"
Nov 23 00:41:18 crc kubenswrapper[4743]: I1123 00:41:18.348498 4743 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6zhb2" podUID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerName="registry-server" probeResult="failure" output=<
Nov 23 00:41:18 crc kubenswrapper[4743]: timeout: failed to connect service ":50051" within 1s
Nov 23 00:41:18 crc kubenswrapper[4743]: >
podUID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerName="registry-server" probeResult="failure" output=< Nov 23 00:41:18 crc kubenswrapper[4743]: timeout: failed to connect service ":50051" within 1s Nov 23 00:41:18 crc kubenswrapper[4743]: > Nov 23 00:41:23 crc kubenswrapper[4743]: I1123 00:41:23.690276 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:41:23 crc kubenswrapper[4743]: I1123 00:41:23.690951 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:41:27 crc kubenswrapper[4743]: I1123 00:41:27.373861 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6zhb2" Nov 23 00:41:27 crc kubenswrapper[4743]: I1123 00:41:27.399368 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6zhb2" podStartSLOduration=18.893176582 podStartE2EDuration="21.399346163s" podCreationTimestamp="2025-11-23 00:41:06 +0000 UTC" firstStartedPulling="2025-11-23 00:41:08.622943152 +0000 UTC m=+2060.701041289" lastFinishedPulling="2025-11-23 00:41:11.129112743 +0000 UTC m=+2063.207210870" observedRunningTime="2025-11-23 00:41:11.681351585 +0000 UTC m=+2063.759449712" watchObservedRunningTime="2025-11-23 00:41:27.399346163 +0000 UTC m=+2079.477444330" Nov 23 00:41:27 crc kubenswrapper[4743]: I1123 00:41:27.435464 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6zhb2" Nov 23 00:41:27 crc kubenswrapper[4743]: I1123 00:41:27.615990 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6zhb2"] Nov 23 00:41:28 crc kubenswrapper[4743]: I1123 00:41:28.795094 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6zhb2" podUID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerName="registry-server" containerID="cri-o://33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719" gracePeriod=2 Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.158901 4743 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6zhb2" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.271579 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-utilities\") pod \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.271644 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b57dg\" (UniqueName: \"kubernetes.io/projected/42cdbf14-ea0a-42a7-8416-f57d8739cebd-kube-api-access-b57dg\") pod \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.271806 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-catalog-content\") pod \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\" (UID: \"42cdbf14-ea0a-42a7-8416-f57d8739cebd\") " Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.272507 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-utilities" (OuterVolumeSpecName: "utilities") pod "42cdbf14-ea0a-42a7-8416-f57d8739cebd" (UID: "42cdbf14-ea0a-42a7-8416-f57d8739cebd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.278634 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42cdbf14-ea0a-42a7-8416-f57d8739cebd-kube-api-access-b57dg" (OuterVolumeSpecName: "kube-api-access-b57dg") pod "42cdbf14-ea0a-42a7-8416-f57d8739cebd" (UID: "42cdbf14-ea0a-42a7-8416-f57d8739cebd"). InnerVolumeSpecName "kube-api-access-b57dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.375179 4743 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.375221 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b57dg\" (UniqueName: \"kubernetes.io/projected/42cdbf14-ea0a-42a7-8416-f57d8739cebd-kube-api-access-b57dg\") on node \"crc\" DevicePath \"\"" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.392591 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42cdbf14-ea0a-42a7-8416-f57d8739cebd" (UID: "42cdbf14-ea0a-42a7-8416-f57d8739cebd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.476085 4743 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42cdbf14-ea0a-42a7-8416-f57d8739cebd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.802187 4743 generic.go:334] "Generic (PLEG): container finished" podID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerID="33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719" exitCode=0 Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.802227 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zhb2" event={"ID":"42cdbf14-ea0a-42a7-8416-f57d8739cebd","Type":"ContainerDied","Data":"33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719"} Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.802254 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zhb2" event={"ID":"42cdbf14-ea0a-42a7-8416-f57d8739cebd","Type":"ContainerDied","Data":"820a5edb857446969052493bc984dc9634b0db27878107050670db160c9e2807"} Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.802271 4743 scope.go:117] "RemoveContainer" containerID="33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.802285 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6zhb2" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.820306 4743 scope.go:117] "RemoveContainer" containerID="b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.838660 4743 scope.go:117] "RemoveContainer" containerID="72b3bc33bfbe814cd9a73a6914957e5fea34021e3668979c1874313883493ed7" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.883966 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6zhb2"] Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.887452 4743 scope.go:117] "RemoveContainer" containerID="33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719" Nov 23 00:41:29 crc kubenswrapper[4743]: E1123 00:41:29.887871 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719\": container with ID starting with 33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719 not found: ID does not exist" containerID="33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.887908 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719"} err="failed to get container status \"33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719\": rpc error: code = NotFound desc = could not find container \"33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719\": container with ID starting with 33f9f5d394e8849440fa60fa9ba31fcf8c4feb62f8415844cdd7031bf45c0719 not found: ID does not exist" Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.887933 4743 scope.go:117] "RemoveContainer" containerID="b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab" Nov 23 00:41:29 
Nov 23 00:41:29 crc kubenswrapper[4743]: E1123 00:41:29.888130 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab\": container with ID starting with b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab not found: ID does not exist" containerID="b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab"
Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.888158 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab"} err="failed to get container status \"b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab\": rpc error: code = NotFound desc = could not find container \"b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab\": container with ID starting with b620f14c5bc789d0d5bb30e23050a798dc4562d531f11c8b43841406b03922ab not found: ID does not exist"
Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.888175 4743 scope.go:117] "RemoveContainer" containerID="72b3bc33bfbe814cd9a73a6914957e5fea34021e3668979c1874313883493ed7"
Nov 23 00:41:29 crc kubenswrapper[4743]: E1123 00:41:29.888372 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72b3bc33bfbe814cd9a73a6914957e5fea34021e3668979c1874313883493ed7\": container with ID starting with 72b3bc33bfbe814cd9a73a6914957e5fea34021e3668979c1874313883493ed7 not found: ID does not exist" containerID="72b3bc33bfbe814cd9a73a6914957e5fea34021e3668979c1874313883493ed7"
Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.888399 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72b3bc33bfbe814cd9a73a6914957e5fea34021e3668979c1874313883493ed7"} err="failed to get container status \"72b3bc33bfbe814cd9a73a6914957e5fea34021e3668979c1874313883493ed7\": rpc error: code = NotFound desc = could not find container \"72b3bc33bfbe814cd9a73a6914957e5fea34021e3668979c1874313883493ed7\": container with ID starting with 72b3bc33bfbe814cd9a73a6914957e5fea34021e3668979c1874313883493ed7 not found: ID does not exist"
Nov 23 00:41:29 crc kubenswrapper[4743]: I1123 00:41:29.892266 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6zhb2"]
Nov 23 00:41:30 crc kubenswrapper[4743]: I1123 00:41:30.731151 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" path="/var/lib/kubelet/pods/42cdbf14-ea0a-42a7-8416-f57d8739cebd/volumes"
Nov 23 00:41:47 crc kubenswrapper[4743]: I1123 00:41:47.994728 4743 generic.go:334] "Generic (PLEG): container finished" podID="adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" containerID="b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0" exitCode=0
Nov 23 00:41:47 crc kubenswrapper[4743]: I1123 00:41:47.994806 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmwwn/must-gather-qg2kk" event={"ID":"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c","Type":"ContainerDied","Data":"b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0"}
Nov 23 00:41:47 crc kubenswrapper[4743]: I1123 00:41:47.995849 4743 scope.go:117] "RemoveContainer" containerID="b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0"
Nov 23 00:41:48 crc kubenswrapper[4743]: I1123 00:41:48.902467 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lmwwn_must-gather-qg2kk_adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c/gather/0.log"
Nov 23 00:41:53 crc kubenswrapper[4743]: I1123 00:41:53.689879 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 00:41:53 crc kubenswrapper[4743]: I1123 00:41:53.690508 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 00:41:55 crc kubenswrapper[4743]: I1123 00:41:55.366475 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lmwwn/must-gather-qg2kk"]
Nov 23 00:41:55 crc kubenswrapper[4743]: I1123 00:41:55.366819 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lmwwn/must-gather-qg2kk" podUID="adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" containerName="copy" containerID="cri-o://3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e" gracePeriod=2
Nov 23 00:41:55 crc kubenswrapper[4743]: I1123 00:41:55.375264 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lmwwn/must-gather-qg2kk"]
Nov 23 00:41:55 crc kubenswrapper[4743]: I1123 00:41:55.751083 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lmwwn_must-gather-qg2kk_adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c/copy/0.log"
Nov 23 00:41:55 crc kubenswrapper[4743]: I1123 00:41:55.751696 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmwwn/must-gather-qg2kk"
Nov 23 00:41:55 crc kubenswrapper[4743]: I1123 00:41:55.934603 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-must-gather-output\") pod \"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c\" (UID: \"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c\") "
Nov 23 00:41:55 crc kubenswrapper[4743]: I1123 00:41:55.934704 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hggrq\" (UniqueName: \"kubernetes.io/projected/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-kube-api-access-hggrq\") pod \"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c\" (UID: \"adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c\") "
Nov 23 00:41:55 crc kubenswrapper[4743]: I1123 00:41:55.941845 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-kube-api-access-hggrq" (OuterVolumeSpecName: "kube-api-access-hggrq") pod "adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" (UID: "adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c"). InnerVolumeSpecName "kube-api-access-hggrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 00:41:55 crc kubenswrapper[4743]: I1123 00:41:55.984566 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" (UID: "adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.036218 4743 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-must-gather-output\") on node \"crc\" DevicePath \"\""
Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.036251 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hggrq\" (UniqueName: \"kubernetes.io/projected/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c-kube-api-access-hggrq\") on node \"crc\" DevicePath \"\""
Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.068930 4743 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lmwwn_must-gather-qg2kk_adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c/copy/0.log"
Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.069406 4743 generic.go:334] "Generic (PLEG): container finished" podID="adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" containerID="3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e" exitCode=143
Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.069468 4743 scope.go:117] "RemoveContainer" containerID="3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e"
Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.069467 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmwwn/must-gather-qg2kk"
Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.088557 4743 scope.go:117] "RemoveContainer" containerID="b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0"
Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.120737 4743 scope.go:117] "RemoveContainer" containerID="3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e"
Nov 23 00:41:56 crc kubenswrapper[4743]: E1123 00:41:56.121341 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e\": container with ID starting with 3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e not found: ID does not exist" containerID="3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e"
Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.121383 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e"} err="failed to get container status \"3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e\": rpc error: code = NotFound desc = could not find container \"3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e\": container with ID starting with 3478308b702e8b411c92422244ff798804aeeec5b5126db5daacecf97b4c067e not found: ID does not exist"
Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.121415 4743 scope.go:117] "RemoveContainer" containerID="b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0"
Nov 23 00:41:56 crc kubenswrapper[4743]: E1123 00:41:56.121868 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0\": container with ID starting with b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0 not found: ID does not exist" containerID="b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0"
containerID="b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0" Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.121901 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0"} err="failed to get container status \"b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0\": rpc error: code = NotFound desc = could not find container \"b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0\": container with ID starting with b726a4263f53865eb9d86fb9cbe84aefaf29598160a08c64fa7a8b3dbcc6e5c0 not found: ID does not exist" Nov 23 00:41:56 crc kubenswrapper[4743]: I1123 00:41:56.746282 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" path="/var/lib/kubelet/pods/adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c/volumes" Nov 23 00:42:23 crc kubenswrapper[4743]: I1123 00:42:23.690650 4743 patch_prober.go:28] interesting pod/machine-config-daemon-cxtxv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 00:42:23 crc kubenswrapper[4743]: I1123 00:42:23.691228 4743 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 00:42:23 crc kubenswrapper[4743]: I1123 00:42:23.691288 4743 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" Nov 23 00:42:23 crc kubenswrapper[4743]: I1123 00:42:23.692116 4743 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b38f2be124c245f85f6bf4d98448b81d3ad2669826bffd6a59119fd3b3f8421a"} pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 00:42:23 crc kubenswrapper[4743]: I1123 00:42:23.692209 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" podUID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerName="machine-config-daemon" containerID="cri-o://b38f2be124c245f85f6bf4d98448b81d3ad2669826bffd6a59119fd3b3f8421a" gracePeriod=600 Nov 23 00:42:24 crc kubenswrapper[4743]: I1123 00:42:24.330658 4743 generic.go:334] "Generic (PLEG): container finished" podID="dbda6ee4-c567-4104-9c7a-ca01c6f9d989" containerID="b38f2be124c245f85f6bf4d98448b81d3ad2669826bffd6a59119fd3b3f8421a" exitCode=0 Nov 23 00:42:24 crc kubenswrapper[4743]: I1123 00:42:24.330752 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerDied","Data":"b38f2be124c245f85f6bf4d98448b81d3ad2669826bffd6a59119fd3b3f8421a"} Nov 23 00:42:24 crc kubenswrapper[4743]: I1123 00:42:24.331293 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cxtxv" 
event={"ID":"dbda6ee4-c567-4104-9c7a-ca01c6f9d989","Type":"ContainerStarted","Data":"552f3afd0e6b6bad1ae2bd5337822c20a180d8b76cfd591f29b2a6f35e57c021"} Nov 23 00:42:24 crc kubenswrapper[4743]: I1123 00:42:24.331317 4743 scope.go:117] "RemoveContainer" containerID="bb7826f244e98e0065f4905e0b7040092e1e0ba4fb816eab0672809e4cf78e15" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.360746 4743 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-vfj7h"] Nov 23 00:42:47 crc kubenswrapper[4743]: E1123 00:42:47.363200 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" containerName="gather" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.363231 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" containerName="gather" Nov 23 00:42:47 crc kubenswrapper[4743]: E1123 00:42:47.363257 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" containerName="copy" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.363270 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" containerName="copy" Nov 23 00:42:47 crc kubenswrapper[4743]: E1123 00:42:47.363304 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerName="registry-server" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.363317 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerName="registry-server" Nov 23 00:42:47 crc kubenswrapper[4743]: E1123 00:42:47.363340 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerName="extract-utilities" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.363353 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerName="extract-utilities" Nov 23 00:42:47 crc kubenswrapper[4743]: E1123 00:42:47.363366 4743 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerName="extract-content" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.363378 4743 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerName="extract-content" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.363608 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="42cdbf14-ea0a-42a7-8416-f57d8739cebd" containerName="registry-server" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.363634 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" containerName="copy" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.363661 4743 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf8dee2-9e22-4ccc-9614-7c8c6aa03d8c" containerName="gather" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.364343 4743 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-vfj7h" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.365433 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-vfj7h"] Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.559975 4743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5vpx\" (UniqueName: \"kubernetes.io/projected/b9c51b1b-d686-42d0-8661-bb06b9a1d355-kube-api-access-p5vpx\") pod \"service-telemetry-framework-operators-vfj7h\" (UID: \"b9c51b1b-d686-42d0-8661-bb06b9a1d355\") " pod="service-telemetry/service-telemetry-framework-operators-vfj7h" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.661589 4743 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5vpx\" (UniqueName: \"kubernetes.io/projected/b9c51b1b-d686-42d0-8661-bb06b9a1d355-kube-api-access-p5vpx\") pod \"service-telemetry-framework-operators-vfj7h\" (UID: \"b9c51b1b-d686-42d0-8661-bb06b9a1d355\") " pod="service-telemetry/service-telemetry-framework-operators-vfj7h" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.685826 4743 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5vpx\" (UniqueName: \"kubernetes.io/projected/b9c51b1b-d686-42d0-8661-bb06b9a1d355-kube-api-access-p5vpx\") pod \"service-telemetry-framework-operators-vfj7h\" (UID: \"b9c51b1b-d686-42d0-8661-bb06b9a1d355\") " pod="service-telemetry/service-telemetry-framework-operators-vfj7h" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.690627 4743 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-vfj7h" Nov 23 00:42:47 crc kubenswrapper[4743]: I1123 00:42:47.920304 4743 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-vfj7h"] Nov 23 00:42:48 crc kubenswrapper[4743]: I1123 00:42:48.537722 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-vfj7h" event={"ID":"b9c51b1b-d686-42d0-8661-bb06b9a1d355","Type":"ContainerStarted","Data":"bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5"} Nov 23 00:42:48 crc kubenswrapper[4743]: I1123 00:42:48.538070 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-vfj7h" event={"ID":"b9c51b1b-d686-42d0-8661-bb06b9a1d355","Type":"ContainerStarted","Data":"357ff9f4e0c2a91c959d08aadf0dc0cf552868c397979fe6f586c313e4baeb9d"} Nov 23 00:42:48 crc kubenswrapper[4743]: I1123 00:42:48.559913 4743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-operators-vfj7h" podStartSLOduration=1.469289495 podStartE2EDuration="1.559892523s" podCreationTimestamp="2025-11-23 00:42:47 +0000 UTC" firstStartedPulling="2025-11-23 00:42:47.929734288 +0000 UTC m=+2160.007832415" lastFinishedPulling="2025-11-23 00:42:48.020337316 +0000 UTC m=+2160.098435443" observedRunningTime="2025-11-23 00:42:48.556628323 +0000 UTC m=+2160.634726530" watchObservedRunningTime="2025-11-23 00:42:48.559892523 +0000 UTC m=+2160.637990660" Nov 23 00:42:57 crc kubenswrapper[4743]: I1123 00:42:57.691189 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/service-telemetry-framework-operators-vfj7h" Nov 23 
Nov 23 00:42:57 crc kubenswrapper[4743]: I1123 00:42:57.691745 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/service-telemetry-framework-operators-vfj7h"
Nov 23 00:42:57 crc kubenswrapper[4743]: I1123 00:42:57.727048 4743 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/service-telemetry-framework-operators-vfj7h"
Nov 23 00:42:58 crc kubenswrapper[4743]: I1123 00:42:58.673444 4743 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/service-telemetry-framework-operators-vfj7h"
Nov 23 00:42:58 crc kubenswrapper[4743]: I1123 00:42:58.731940 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-vfj7h"]
Nov 23 00:43:00 crc kubenswrapper[4743]: I1123 00:43:00.642907 4743 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-operators-vfj7h" podUID="b9c51b1b-d686-42d0-8661-bb06b9a1d355" containerName="registry-server" containerID="cri-o://bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5" gracePeriod=2
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.049177 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-vfj7h"
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.171254 4743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5vpx\" (UniqueName: \"kubernetes.io/projected/b9c51b1b-d686-42d0-8661-bb06b9a1d355-kube-api-access-p5vpx\") pod \"b9c51b1b-d686-42d0-8661-bb06b9a1d355\" (UID: \"b9c51b1b-d686-42d0-8661-bb06b9a1d355\") "
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.178600 4743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c51b1b-d686-42d0-8661-bb06b9a1d355-kube-api-access-p5vpx" (OuterVolumeSpecName: "kube-api-access-p5vpx") pod "b9c51b1b-d686-42d0-8661-bb06b9a1d355" (UID: "b9c51b1b-d686-42d0-8661-bb06b9a1d355"). InnerVolumeSpecName "kube-api-access-p5vpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.274243 4743 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5vpx\" (UniqueName: \"kubernetes.io/projected/b9c51b1b-d686-42d0-8661-bb06b9a1d355-kube-api-access-p5vpx\") on node \"crc\" DevicePath \"\""
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.654333 4743 generic.go:334] "Generic (PLEG): container finished" podID="b9c51b1b-d686-42d0-8661-bb06b9a1d355" containerID="bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5" exitCode=0
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.654396 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-vfj7h" event={"ID":"b9c51b1b-d686-42d0-8661-bb06b9a1d355","Type":"ContainerDied","Data":"bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5"}
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.654414 4743 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-vfj7h"
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.654444 4743 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-vfj7h" event={"ID":"b9c51b1b-d686-42d0-8661-bb06b9a1d355","Type":"ContainerDied","Data":"357ff9f4e0c2a91c959d08aadf0dc0cf552868c397979fe6f586c313e4baeb9d"}
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.654474 4743 scope.go:117] "RemoveContainer" containerID="bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5"
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.688359 4743 scope.go:117] "RemoveContainer" containerID="bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5"
Nov 23 00:43:01 crc kubenswrapper[4743]: E1123 00:43:01.689864 4743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5\": container with ID starting with bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5 not found: ID does not exist" containerID="bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5"
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.689936 4743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5"} err="failed to get container status \"bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5\": rpc error: code = NotFound desc = could not find container \"bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5\": container with ID starting with bffa147bd03561eacae5888075bb550aed0bf4bf9078207e6a865219bf9d03e5 not found: ID does not exist"
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.694556 4743 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-vfj7h"]
Nov 23 00:43:01 crc kubenswrapper[4743]: I1123 00:43:01.709515 4743 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-vfj7h"]
Nov 23 00:43:02 crc kubenswrapper[4743]: I1123 00:43:02.737369 4743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c51b1b-d686-42d0-8661-bb06b9a1d355" path="/var/lib/kubelet/pods/b9c51b1b-d686-42d0-8661-bb06b9a1d355/volumes"